Your Independent Hewlett Packard Enterprise Technology User Community | Spring 2017

State of the IoT Union: HPE to work with Tata Communications to build world’s largest IoT Network in India

Timothy Chou’s Ten-Year Plan for IT Career Longevity

Infrastructure as Destiny: How Purdue Builds a Support Fabric for Big Data-Enabled IoT

Three Common IT Challenges and How Hyperconvergence Is Solving Them

Nigel Upton, WW Director/GM IoT, CSB, Hewlett Packard Enterprise

XYGATE® SecurityOne™ — Security Intelligence and Analytics for HPE Integrity NonStop™ Servers

Visibility
Faster Threat Detection
Improved Risk Management
Differentiate Noise from Actionable Incidents
Minimize the Impact of a Breach by Identifying it in its Early Stages


Reduce Mean Time To Detection. Learn more at xypro.com/SecurityOne

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.


Focus On Technology + Community

Editor's Letter...... 3

ADVOCACY
Connect Introduces Tech Forums at HPE Discover...... 6

DISCOVER STORAGE NEWS
Who Could’ve 'Predicted’ Nimble Storage Flash Acquisition Announcement?...... 9

TOP THINKING
Three Common IT Challenges and How Hyperconvergence Is Solving Them...... 11
Security & Privacy on the Internet of Things: We’re Toast...... 13
Infrastructure as Destiny: How Purdue Builds a Support Fabric for Big Data-Enabled IoT...... 17
Futurists Explain Why Technology Will ‘Disappear’ in 2030...... 21

STATE OF THE IoT UNION
HPE to work with Tata Communications to build world’s largest IoT Network in India...... 27
Timothy Chou’s Ten-Year Plan for IT Career Longevity...... 31

eCube Systems Interviews VSI’s Brett Cameron on OpenVMS, Open Source and Developer Tools...... 35
Protecting Sensitive Data In and Beyond the Data Lake...... 43
A Large Financial Institution Migrates Datacenters with No Downtime Using HPE Shadowbase ZDM...... 47
Tiered and Policy Based Data Backup...... 51
Bring Your Own Things...... 55
Real-world Customers Use HPE Storage To Help Drive Business Growth...... 57

Connect Converge Staff
Chief Executive Officer: Kristi Elizondo
Editor-In-Chief: Stacie Neall
Event Marketing Manager: Kelly Luna
Art Director: John Clark
Partner Relations: [email protected]
Click here to view Connect Board of Directors

Editor’s Letter

“THE BEST WAY TO PREDICT THE FUTURE IS TO INVENT IT.” — Alan Kay, computer scientist

Welcome to the Spring issue of Connect Converge

Delivering best outcomes for IoT success

You have to be on top of your game when it comes to technology acronyms. As we all know, some will come and some will go. But with an estimated 50 billion devices connected by 2020, IoT is the teacher’s pet and will without any doubt be a game changer for every enterprise. In this issue (Page 27), HPE’s Nigel Upton talks about the dynamic new partnership with Tata Communications and how the HPE Universal IoT Platform is driving enterprise value in the IoT revolution.

Most of us already know the importance of staying relevant in our careers, but a good reminder never hurt anyone. Read Janice Reeder-Highleyman’s article on Page 31 and learn what Stanford University’s Dr. Timothy Chou believes is crucial when weighing your career path in technology.

If your success is your mission, do not miss HPE Discover Las Vegas this year. Perhaps there is no better place to connect and learn than this venue. Join us in the Connect Community booth for one of many informative Tech Forums, or stop by to get better acquainted with your HPE enterprise user group.

And as always, if you have a technical how-to or an inspiring customer success story to share, please send it our way! Thank you for sharing your technical expertise and your time with your HPE user community.

Managing Editor

Stay Connected: @sjneal | [email protected]

Would you bungee jump without knowing it was safe? Then why take chances with your Business Continuity solution? Many business continuity solutions are difficult or even impossible to effectively test. You’ll never know if they work until you really need them. And by then, it’s too late. Eliminate these risks by using an HPE Shadowbase Active/Active or Sizzling-Hot-Takeover business continuity solution. Then, you’ll know for sure that your safety net will work. Don’t cling to the cliff, contact us!

For more information, please see the Gravic white paper: Choosing a Business Continuity Solution to Match Your Business Availability Requirements. ShadowbaseSoftware.com

©2017 Gravic, Inc. All product names mentioned are trademarks of their respective owners. Specifications subject to change without notice.

ADVOCACY

Dr. Bill Highleyman
Managing Editor, Availability Digest

Dr. Bill Highleyman is the Managing Editor of The Availability Digest, a monthly online publication and a resource of information on high- and continuous-availability topics. His years of experience in the design and implementation of mission-critical systems have made him a popular seminar speaker and a sought-after technical writer. Dr. Highleyman is a past chairman of ITUG, the former HP NonStop User’s Group, the holder of numerous U.S. patents, the author of Performance Analysis of Transaction Processing Systems, and the co-author of the three-volume series, Breaking the Availability Barrier.

Connect Introduces Tech Forums at HPE Discover

Connect Technical Forums deal in depth with specific topics of interest to the attendees of HPE Discover conferences. The forums are a new feature that has been added both to the annual Discover conference in Las Vegas and to the Discover conferences held in Europe or the UK. They are presented by HPE employees or by customers and/or vendors who are experts in a specific topic. The forums are yet another benefit for the members of Connect, which represents all HPE technology users. Abstracts of the Technical Forums presented at last year’s Discover conferences in Las Vegas and London can be found below. Be sure to participate in those that Connect has scheduled for conferences in 2017 and beyond. The forums are held in the Connect Community Lounge on the exhibit floors.

Tech Forums at the 2016 Las Vegas Discover Conference

Enterprise Networking
The “Enterprise Networking” Tech Forum was moderated by Steve Davidek, IT Manager of the City of Sparks, and by Miguel Olague of Summit Partners. Abstract: Aruba, a Hewlett Packard Enterprise company, is a networking vendor selling enterprise wireless LAN and edge access networking equipment. Connect with the experts on what is going on in the Hewlett Packard Enterprise networking world. Discussions include how ArubaOS, Aruba Wireless Access Points, Aruba AirWave, ClearPass and IMC can help you manage your HPE networking environment. We are also going to discuss Aruba Beacons and ideas on how to use them. Meet other HPE customers in an open roundtable discussion with HPE Networking experts.


Key Management: The “Key” to Successfully Adopting Enterprise Encryption
The “Key Management” Tech Forum was presented by Nathan Turajski and Farshad Ghazi of HPE Security – Data Security. Abstract: For data encryption to be successfully integrated and deployed with enterprise IT, a best-practice approach to managing encryption keys needs to be an integral part of the strategy. But not all applications and use cases require the same approach. Join this chalk talk in which HPE Data Security product management experts will discuss the various approaches being adopted today based on the diverse data types, applications and systems requiring high-assurance security.

Join us! Connect Tech Talks at HPE Discover Las Vegas 2017

So You Want to be a CIO?
The Tech Forum “So You Want to be a CIO” was presented by Michael Scroggins of Washington State Community and Technical Colleges. Abstract: Connect with the experts and HPE customers during this compelling session. The CIO is a high-risk position. There are many thoughts and much advice related to surviving as a CIO. You’ve got to get there first. This discussion will center on strategies and considerations that you can use to get there. Why would anyone want to be a CIO? It is the best job in the world… if you have what it takes.

Converged Systems
The “Converged Systems” Tech Forum was presented by Chris Purcell, HPE Manager, Influencer Marketing. Abstract: Connect with the experts on HPE's Converged System team at this roundtable discussion about Converged and Hyper-Converged Systems. Learn where the data center is heading, how single integrated management is sweeping through the data center, and hear how customers are successfully deploying different types of key workloads across their infrastructure. This is a great opportunity to discuss your data center infrastructure questions directly with HPE technologists.

Connect Members Save $300 on HPE Discover Las Vegas 2017 Registration Here


HP-UX
The “HP-UX” Tech Forum was presented by Jeff Kyle, Director, HPE MCS Product Management. Abstract: Connect with global enterprise UNIX® peers for a Roundtable discussion about HP-UX, including the future of HP-UX, mission-critical computing, Integrity and beyond. Share best practices and provide feedback and enhancement requests directly to HPE technologists.

AIX to Linux Migration Best Practices
The “AIX to Linux Migration Best Practices” Tech Forum was moderated by Kyle Todd, Category Sales Manager, HPE Mission Critical Solutions, and Debbie Whitehurst, VP, HPE Data Center Consulting Service Line. Abstract: Connect with the experts at this roundtable discussion on AIX to Linux migration and why HPE platforms are the ideal environment for mission-critical performance at a fraction of the cost. Learn best practices on successful migration implementations and what it means for your bottom line as you journey to an open IT architecture. This is a great opportunity to speak directly with HPE technologists about migration successes or questions.

Protecting Underground Facilities with HPE Infrastructure
This Tech Forum was presented by Bill Kiger, President and CEO, and Jon DeMoss, Director of Technology, of Pennsylvania One Call System, Inc. Abstract: Pennsylvania 811 (PA1Call) is part of a national network of call centers that monitor underground gas pipelines & electric lines. Their mission is to prevent damage and protect lives with an efficient communications network. In April 2015, a malfunction in the data center’s fire suppression system caused damage to their mission-critical infrastructure. Learn how HPE Technology Services & All Lines Technology diagnosed the issues, developed recommendations, & implemented an upgraded & consolidated Datacenter with improved security and reduced risk.

Tech Forums at the 2016 London Discover Conference

Integrated Systems
The “Integrated Systems” Tech Forum was moderated by Chris Purcell, HPE Manager, Influencer Marketing. Abstract: Connect with the experts on the HPE Software-Defined and Cloud team at this Roundtable discussion about converged management, hyperconverged appliances and Composable Infrastructure. Learn where the data center of the future is heading, how a simple integrated management experience is sweeping through the data center, and how customers are successfully delivering a cloud experience across their data center infrastructures. This is an excellent opportunity to discuss your data center infrastructure concerns directly with HPE technologists.

Enterprise Networking
The “Enterprise Networking” Tech Forum was moderated by personnel from Aruba Networks. Abstract: Aruba, a Hewlett Packard Enterprise company, is a networking vendor selling enterprise wireless LAN and edge access networking equipment. Connect with the experts to learn what is happening in the world of Hewlett Packard Enterprise networking. Discussions will include how ArubaOS and Aruba’s Wireless Access Points, AirWave, ClearPass and Intelligent Management Center can help you manage your HPE networking environment. You also will discuss Aruba Beacons and share ideas on how to use them. Meet other HPE customers in an open Roundtable discussion with HPE networking experts.

Enterprise Security
The “Enterprise Security” Tech Forum was moderated by Rob Lesan of XYPRO Technology. Abstract: From the data center to the network, big data and social networking, security is now a crucial part of every IT conversation. Connect with the experts at this Roundtable discussion about enterprise security. Share best practices, offer feedback, and speak directly to HPE technologists in a small, intimate setting.

HPE Storage
The “HPE Storage” Tech Forum was moderated by Calvin Zito of HPE Storage. Abstract: Connect with the experts at this Roundtable discussion about Hewlett Packard Enterprise storage products and solutions, including HPE 3PAR StoreServ, StoreOnce and MSA storage solutions, as well as software-defined storage. Learn how customers of all sizes implement and use HPE storage products. Connect with HPE technologists to provide feedback and enhancement requests directly to HPE.

Enterprise Cloud
The “Enterprise Cloud” Tech Forum was moderated by Steve Davidek, IT Manager of the City of Sparks. Abstract: Join us for an intimate Roundtable discussion about cloud strategies and solutions. Learn from Hewlett Packard Enterprise customers and technologists, provide feedback directly to HPE, and share best practices with peers from around the world. Connect with the experts to learn how to support your workforce wherever it roams.

UNIX (HP-UX)
The “UNIX (HP-UX)” Tech Forum was moderated by Jeff Kyle, Director, HPE MCS Product Management, and Ken Surplice, HPE Category Manager. Abstract: Connect with global enterprise UNIX® peers for a Roundtable discussion about HP-UX, including the future of HP-UX, mission-critical computing, HPE Integrity servers and beyond. Share best practices and provide feedback and enhancement requests directly to HPE technologists.

Who Could’ve 'Predicted’ Nimble Storage Flash Acquisition Announcement?

Calvin Zito
HPE Blogger / Storage Evangelist

Calvin Zito is a 33-year veteran in the IT industry and has worked in storage for 25 years. He’s been a VMware vExpert for 6 years. As an early adopter of social media and active in communities, he has blogged for 8 years. You can find his blog on hpe.com. He started his “social persona” as HPStorageGuy, and after the HP separation manages an active community of storage fans on Twitter as @CalvinZito. You can also contact him via email at [email protected].

This post was originally published on the Around the Storage Block blog at hpe.com/storage/blog.

A few weeks ago I posted a blog article I titled What is Hewlett Packard Enterprise's Strategy? The happenings in the world of IT infrastructure are fascinating. One of the comments I made in the post was that I think the days of the storage-only vendors are limited. With this news from HPE, there will be one less storage-only vendor and HPE Storage undeniably will have the best-in-class storage portfolio from entry to enterprise. I have a post from Bill Philbin, the SVP and GM of Storage and Big Data. Read on to learn more about the move that HPE is making and what Bill had to say about it.

Here's the post from Bill Philbin:

The background

I joined HPE a number of years ago because I truly believed that the datacenter was evolving beyond the ability for pure storage vendors to deliver on what customers wanted. . . call it my own version of predictive analytics. Yet the engineer in me was still obsessed with storage-level innovation and leading the next technology revolution. Being able to bridge absolute best-in-class storage with the surrounding technology stack has enabled the best of both worlds and is why I travel over 400,000 miles each year to talk to customers, engineers, partners and sales teams to find out how we can keep improving.

When HPE acquired 3PAR in 2010 we had a clear mission – to transform storage within Hewlett Packard and bring high-end storage built for ITaaS into the mainstream enterprise. Over the last 6+ years we’ve invested in R&D and our channel community to deliver on that mission. . . growing 3PAR to be the #1 platform in midrange enterprise storage and one of the fastest growing all-flash platforms in the industry. It’s a leader in both the general purpose and solid state Magic Quadrants and swept the #1 spot in all critical capabilities for an unprecedented two years running.

We made the shift to the all-flash datacenter several years ago as the industry was exiting the first flash storage wave, where performance was the only design. We helped usher in the second wave, where the blend of performance, economics and data services enabled mainstream flash adoption. We’re at the cusp of the third wave of flash, where the "new normal" requires deeper application integration, automation up and down the IT stack, embedded risk management and a futureproof approach to managing IT investments.

The news

Just like the NASA missions, it’s time for our next launch to expand to this new frontier. Today, we announced plans to acquire Nimble Storage. When looking at opportunities to complement our existing portfolio, Nimble jumped straight to the top of the list based on combined business opportunity and similarities in engineering design and culture. Much like 3PAR started high and then addressed the needs of customers pushing down market, our interest in Nimble started with an acknowledgement that the flash market is rapidly evolving and those same needs are moving even lower. Entry and midrange customers are demanding the same flash-optimized data services that their enterprise counterparts have enjoyed for several years. However, in this space there is also a need for incredibly straightforward, simple deployment and an expectation for a support experience driven by the consumer interactions we all take for granted on our smart phones and devices.

The marriage of HPE Storage and Nimble is going to be a powerful force that isolated storage start-ups and overweight IT conglomerates are not prepared to deal with. Imagine a best-in-class storage-focused business covering entry to enterprise, embedded in the world’s largest server business, with a clear roadmap to hyperconverged and composable infrastructure. Bookending Nimble and 3PAR, we have more than 500,000 entry MSA and StoreVirtual arrays deployed in small sites where cost is a major driver. On the extreme high end, we continue to invest in new generations of the XP platform, with deployments where 14 nines of high availability and mainframe connectivity are ongoing requirements. For anyone looking at next-generation infrastructure for hybrid IT, we’ve got you covered and are delivering "outside the box" with analytics-driven predictive support, extreme automation, and hooks into the public cloud to help you achieve the right mix of on-prem and off.

Looking east-west, we can cover any storage requirement from SMB to Enterprise to Service Provider. If we turn attention to the north-south axis, we’ve been building in automation beyond the array into IP and Fibre Channel networks, have embedded security and data integrity from host servers through to the array, and are providing application-integrated data protection with automated data movement across storage systems and sites, through to secondary storage, and even up to the public cloud.

Even more possibilities

The proposed acquisition announced today opens up even more possibilities. Nimble’s InfoSight Predictive Analytics Platform is unlike anything else in the industry. Gathering literally tens of millions of data points each day, it enables over 90% of all support cases to be opened automatically. Customers get support without even knowing they may be at risk. Not only that, those cases are closed with just as much automation, leading to customer satisfaction scores that are through the roof. We can’t wait to leverage that engine and bring it to 3PAR and other pieces of the HPE portfolio. The other new possibility is that of multi-cloud storage with the recently announced Nimble Cloud Volumes (NCV). NCV provides block storage services for compute from AWS and Azure that are as easy to use as public cloud storage – but with far superior resiliency and more advanced data services, including hybrid cloud data mobility to prevent lock-in. The NCV approach also handily eliminates the cost associated with public cloud repatriation of data and opaque SLAs, which are two of the largest issues customers have expressed around storage in the public cloud.

It’s been an amazing ride here at HPE and we appreciate the trust you’ve given to us when it comes to your data – arguably one of your most critical business assets. The transformation that we’ve driven as a storage vendor and IT provider over the last several years has put us in position to help you manage the next wave of digital transformation. Today’s announcement is a critical piece of that puzzle and we look forward to bringing you more details as soon as we are able.

Forward-looking Statements

This document contains forward-looking statements within the meaning of the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. Such statements involve risks, uncertainties and assumptions. If such risks or uncertainties materialize or such assumptions prove incorrect, the results of HPE and its consolidated subsidiaries could differ materially from those expressed or implied by such forward-looking statements and assumptions. All statements other than statements of historical fact are statements that could be deemed forward-looking statements, including any statements regarding the expected benefits and costs of the transaction contemplated by this document; the expected timing of the completion of the transaction; the ability of HPE, its subsidiaries and Nimble to complete the transaction considering the various conditions to the transaction, some of which are outside the parties’ control, including those conditions related to regulatory approvals; any statements of expectation or belief; and any statements of assumptions underlying any of the foregoing. Risks, uncertainties and assumptions include the possibility that expected benefits may not materialize as expected; that the transaction may not be timely completed, if at all; that, prior to the completion of the transaction, Nimble’s business may not perform as expected due to transaction-related uncertainty or other factors; that the parties are unable to successfully implement integration strategies; and other risks that are described in HPE’s SEC reports, including but not limited to the risks described in HPE’s Annual Report on Form 10-K for its fiscal year ended October 31, 2016. HPE assumes no obligation and does not intend to update these forward-looking statements.

>> TOP THINKING

Three Common IT Challenges and How Hyperconvergence Is Solving Them

Chris Purcell

Chris Purcell has 29+ years of experience working with technology within the datacenter. He is currently focused on integrated systems (server, storage, networking and cloud, which come wrapped with a complete set of integration consulting and integration services). You can find Chris on Twitter as @Chrispman01 and @HPE_ConvergedDI, and his contributions to the HPE CI blog at www.hpe.com/info/ciblog.

Welcome to the new normal of “digital disruption.” Like it or not, here you are. It can be a treacherous place for a business to thrive. Digital disruption requires your business to turn ideas into value quickly, and to adapt and work efficiently. The trouble is most businesses are being held back by an IT infrastructure that is not designed for the speed today’s businesses demand. Complex manual processes and non-integrated tools fail to provide the simplicity, flexibility, and speed you need to support your current tasks, much less your new ideas and applications.

The good news is that IT infrastructure is finally catching up with business needs. I have written past articles about the future of IT and how businesses can now operate with cloud-like efficiency, consume IT services with cloud-like flexibility, and deploy at cloud-like speed, all in their own datacenters, through hyperconvergence. Hyperconverged solutions are innovative, all-in-one virtualization solutions that integrate compute, software-defined storage and software-defined intelligence to allow IT to run at the speed of the business. How do they do that? One way is by helping businesses solve

some of the most common datacenter challenges: simplicity, flexibility, and speed. You don’t have to take my word for it either. Customers from a variety of industries all over the world are putting hyperconverged solutions to work in their datacenters and solving some of the toughest IT challenges out there.

Businesses need to simplify operations

One of India’s leading tile manufacturers, Simpolo Ceramics, was struggling to configure and manage their new virtual environment efficiently. After extensively researching solutions that would fit their growing business, Simpolo decided on a hyperconverged solution that offers a consumer-inspired user experience, making managing virtual machines (VMs) simple for IT generalists. The hyperconverged solution allows their users to deploy VMs in just five clicks, update hardware and firmware in just three clicks, and provides instant diagnostics and analytics to enable faster response to business needs.

“Compared to a traditional approach, the HPE Hyper Converged solution has been very easy to maintain,” explains Niraj Pandit, head of IT for Simpolo Group. “The ease with which we can transfer and manage virtual servers has been incredible. It has been straightforward to get the team up and running with the solution – no glitches, no special training, and simple to operate.”

Businesses need to enable their workforce to work from anywhere

FireWhat? is a Geographic Information Systems (GIS) and technology company dedicated to helping firefighters perform their jobs more safely by equipping them with the most up-to-date information possible. They needed to upgrade their technology trailers to enable faster, more reliable delivery of information in the field. The FireWhat? team wanted an integrated server, storage, and networking system that could be deployed in any environment imaginable. (Think wildfires, hurricanes, even earthquakes!) FireWhat? chose hyperconvergence because it delivers a virtualized infrastructure platform that combines powerful compute resources, highly available storage, and networking connections in a single, rackmount form factor. Their hyperconverged solution turns their trailers into mobile, centralized data hubs for any disaster command center.

“Now we’re able to push the availability of our data by bringing more data in faster,” explains Sam Lanier, CEO of FireWhat? “With the reliability and power of the HPE Hyper Converged solution, we’re able to deliver new data to our responders in 15 minutes instead of 16 hours.” That’s 64x faster!

Businesses need to work faster and smarter

Canadian telecommunications and media company Rogers Communications needed a simple and easily scalable solution that would support dynamic and unpredictable business requirements. They were specifically looking for a solution that could decrease their time-to-market and shorten the time between receiving a request and delivering a VM to the business. After learning that their CPU-intensive applications would be more expensive running on a traditional or converged infrastructure, Rogers Communications decided on a hyperconverged solution.

By implementing a hyperconverged solution, Rogers Communications was able to slash their time-to-market (from two weeks to deliver a new VM down to mere minutes) and gain enough scalability to develop their applications. Their hyperconverged solution was quickly installed and easily deployed, and gave their users a drastically simplified user experience.

“I wanted a Virtual Machine vending machine and that’s what the HPE Hyper Converged 380 is. We can now spin out a new virtual machine in just minutes when it used to take at least two weeks,” explains Hani Mousa, Senior Manager, IT Infrastructure Architecture, Rogers Communications. “And the interface is easy, even for inexperienced users. It’s as easy as using a smartphone!”

By allowing businesses to simplify operations, work anywhere (in any condition), and work faster, hyperconverged solutions from HPE help IT become an innovator for the business. To learn more about how hyperconverged solutions from HPE can help solve your business’s toughest challenges, visit hpe.com/info/hyperconvergence.

Doing IoT Right: 6 Essential Lessons for Business Leaders
Explore real-world IoT examples of what went well, what didn’t and why.
DOWNLOAD THE WHITE PAPER

SECURITY & PRIVACY ON THE INTERNET OF THINGS: WE’RE TOAST.

Karen Martin

When John Romkey and Simon Hackett demonstrated the first Internet-connected toaster in 1990¹, they could not have known how the Internet of Things (IoT) would take off less than 30 years later. Today we are seeing more and more devices, from nanny cams to automobiles, communicating over networks. Although technological progress is notoriously difficult to predict, 30 years of Internet experience allows us to make one confident assertion: if toasters connect to the Internet, they will be hacked.

Indeed, some people believe it has already happened. CBR Online’s 2015 article claiming that hacked smart toasters were rejecting white bread was plausible enough to be shared and repeated on LinkedIn and Facebook, even though it was published on April Fool’s Day.² More recently, Andrew McGill set up a fake toaster server, and recorded over 300 attempted attacks on the toaster the very first day.³ Someone apparently thinks a toaster is worth hacking, and toasters must be one of the least attractive targets on the IoT.

IoT Risks

Network users have knowingly risked security and privacy for the advantages of connectivity since the first worms and viruses appeared on the Internet in the late 1980s. But the nature of the IoT poses new risks and challenges. IoT is new enough to lack a generally accepted definition, but for the purposes of examining privacy and security, let us say that the IoT consists of objects equipped with sensors and controllers that are connected to a network.

• Many of the devices on the IoT have significant constraints – power, processing capability, and memory, for example – that affect their security capabilities.

• IoT sensors are collecting unprecedented amounts and types of data. Much of the data may seem innocuous, but researchers have demonstrated that innocuous data from multiple sources can be combined to reveal sensitive information. Furthermore, information automatically sent to a third party may not enjoy legal protection under existing privacy laws.

• Cyber-attacks against IoT controllers can cause physical damage.

• Some IoT devices are designed without appropriate security controls, ignoring information security practices developed as the Internet has evolved.

If IoT devices are connected without adequate security, they will pose significant privacy and security risks for individuals and businesses.

Physical Security Risks

Automobiles are probably the most familiar component of the IoT. Most new cars have dozens of controllers and sensors linked into a Controller Area Network (CAN). This network controls brakes, engines, locks, lights, and many other functions. Technicians and owners can access the network through a diagnostic port, and network data can be shared wirelessly through the car’s cellular data network.

In 2010 and 2011, a group of researchers demonstrated that flaws in the CAN’s security allow an adversary to compromise many critical automobile systems through a variety of vectors, including the diagnostics port and the cellular data network.⁴ They demonstrated remotely disabling the brakes, turning off the lights, and killing the engine of a moving car.

Implanted medical devices with wireless capability, such as pacemakers and insulin pumps, are also vulnerable to remote attacks. In 2011, Barnaby Jack demonstrated that he could wirelessly connect to an insulin pump from a distance of 90 meters, and program the pump to deliver a lethal level of insulin.⁵ In 2013, he reported that he was able to force any pacemaker within 50 feet to deliver a lethal shock to its owner.⁶

In industrial settings, IoT poses a particularly grave risk, as demonstrated by the Stuxnet worm in 2010 and a cyber-attack on a German steel mill reported in 2014. The Stuxnet worm spread from system to system, apparently searching for networked controllers running Siemens Step 7 software to handle high-speed centrifuges used in Iran’s nuclear enrichment program.⁷ It appeared to be designed to take over centrifuges and force them to fail while sending false feedback to outside controllers to cover the attack. No attacks were actually confirmed. Given the nature of the target, it is possible that any attacks, successful or not, would not be reported.

The German government did report a successful IoT attack on a German steel mill in December 2014.⁸ The adversary accessed the mill’s corporate network through a phishing attack, and located the information and credentials needed to gain access to the ICS. According to the investigators, the adversary clearly understood the ICS system and the mill’s production process. The attack shut down multiple control system components, resulting in massive physical damage to a blast furnace.

We are likely to see more cyber-attacks causing physical damage as the IoT expands and adversaries find more vulnerabilities to exploit. Device makers must weigh security tradeoffs. Toasters and webcams may not require robust security, but designers of industrial or automotive controllers and medical devices need to follow appropriate security practices.

Privacy Issues

If you, your home, your car, and your office are wired with sensors, you are recording an astonishing amount of personal information. Some of this information is obviously sensitive, such as any information recorded by medical devices, but it may not be protected by existing law. The principle stated by Justice Powell in United States v. Miller holds that the Fourth Amendment “does not prohibit the obtaining of information revealed to a third party and conveyed by him to Government authorities, even if the information is revealed on the assumption that it will be used only for a limited purpose and the confidence placed in the third party will not be betrayed.”¹⁰

Pacemaker data, obtained under warrant, has been cited as evidence supporting an arson charge¹¹, and the FBI has apparently been allowed to listen to oral conversations inside private automobiles.¹² The vast amounts and types of data produced by the IoT, in conjunction with improved data analytics, pose many privacy concerns. We can expect increased attention to data privacy regulation, but it is yet unclear how privacy laws may evolve. It is clear, however, that law enforcement will use the data for criminal cases, if they can, and it is probably safe to assume that cyber-criminals are looking for ways to exploit IoT data.

Quasi-Sensitive Information

Much of the information collected, stored or transmitted by IoT devices seems harmless. Few people would care to protect their toaster data, for example. But it turns out that seemingly harmless data can reveal sensitive information when it is combined with other pieces of information.

In 2000, Latanya Sweeney demonstrated that the combination of zip code, gender, and date of birth was extremely likely to uniquely identify most US residents.¹³ She introduced the term “quasi identifier” to describe information that cannot uniquely identify a person by itself, but can identify a person when it is combined with other quasi identifiers. She famously demonstrated the difficulty of anonymizing data when then-Governor William Weld defended the State of Massachusetts’ release of anonymized records summarizing every state employee’s hospital visit. He assured residents that patient privacy would be protected.
Sweeney combined privacy laws.⁹ Laws vary from country to country, and the medical records with information from publicly from state to state. In the United States, for example, available voter roles, retrieved Governor Welds medical 14 the Health Information Privacy and Accountability Act, records and mailed a copy to his office. only applies to healthcare providers, health plans, and With the IoT, we may need to extend this concept, healthcare clearinghouses. Your doctor must protect perhaps by defining some information as “quasi- your medical records, but the lab posting your paternity sensitive”. If our home is wired with sensors and test results on-line may not have to. controllers, the aggregated data could betray a lot In the United States, if you allow a third party, such as of information. Smart thermostats and locks could FitBit, your cardiologist, or your car’s manufacturer, to reveal whether your house is occupied, and combined collect information from sensors in devices you use, you with data from a smart refrigerator, they might reveal may have no expectation of privacy. Third-Party Doctrine whether or not you have visitors. We may think our
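Sweeney's linkage attack is simple enough to sketch in a few lines of code. Every record, name, and value below is invented for illustration; the point is only that a join on the quasi-identifier triple (zip code, gender, date of birth) re-attaches identities to rows that had their direct identifiers removed:

```python
# Hypothetical illustration of a linkage attack; all records are invented.

# An "anonymized" release: names removed, quasi-identifiers left in place.
medical_release = [
    {"zip": "02138", "gender": "M", "dob": "1945-07-31", "diagnosis": "hypertension"},
    {"zip": "02139", "gender": "F", "dob": "1962-01-15", "diagnosis": "asthma"},
]

# A public voter roll: the same quasi-identifiers, with names attached.
voter_roll = [
    {"name": "W. Weld", "zip": "02138", "gender": "M", "dob": "1945-07-31"},
    {"name": "J. Doe", "zip": "02139", "gender": "F", "dob": "1962-01-15"},
]

def reidentify(medical, voters):
    """Join the two datasets on the (zip, gender, dob) quasi-identifier triple."""
    index = {(v["zip"], v["gender"], v["dob"]): v["name"] for v in voters}
    matches = []
    for record in medical:
        key = (record["zip"], record["gender"], record["dob"])
        if key in index:  # a matching triple re-identifies the "anonymous" row
            matches.append((index[key], record["diagnosis"]))
    return matches

print(reidentify(medical_release, voter_roll))
```

With real data, a join of exactly this shape is what turned the "anonymized" Massachusetts hospital records back into an identifiable medical history.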

We may think our homes are private, but if we share quasi-sensitive data over networks with third parties, we are opening our private lives to scrutiny from adversaries, and possibly law enforcement. George Orwell's 1984 looks more prescient every day.

Enterprises face a similar risk from quasi-sensitive information: patterns of activity revealed by sensors and controllers may combine with other types of data to reveal sensitive business information. One unexpected example, courtesy of Domino's Pizza, is the "Washington Pizza Index". A franchise owner in Washington, D.C., has observed to the press that unusual patterns in late-night pizza orders to federal offices have preceded news of the U.S. invasion of Iraq, the Monica Lewinsky scandal, and the flight of Ferdinand and Imelda Marcos from the Philippines.¹⁵

A large pizza order may not tell you much by itself, but it can be very revealing in combination with other information. If networked IoT sensors and controllers are compromised, they could reveal significant information about production capabilities and schedules. Building environmental controls could reveal which departments are working overtime. With the IoT, the analytic creativity of an adversary appears to be the only limit to the information that can be uncovered.

Common Mistakes

Researchers and adversaries seem to discover the same weaknesses over and over again. A 2013 IOActive report details widespread key management vulnerabilities in Industrial Automation and Control Systems.¹⁶ If an adversary can compromise one device on the network, he frequently gains access to all of them. Many IoT devices are still configured with factory default credentials. The recent Mirai botnet distributed denial of service (DDoS) attacks, for example, were launched by compromised IoT devices, mostly video cameras, DVRs and routers, that were all using factory default usernames and passwords.¹⁷ The German steel mill cyber-attack used phishing, the most common social engineering attack, to get inside the corporate network; there were no controls capable of preventing the adversary from accessing the ICS from the corporate network. The 2011 automobile hack succeeded because of unenforced access control and weak or unenforced protections of the diagnostic services.

The team behind the automobile hack made recommendations that ought to sound eerily familiar to anyone with Internet security experience: implement access control, scan for anomalous behavior, establish fail-safe modes, and distribute available patches to all affected vehicles. The flaws they discovered in 2011 have been patched, but new vulnerabilities are likely to appear, just as they seem to appear on every other networked device.

If IoT device makers do not learn from the experience of previous generations of networked device designers, adversaries will continue to exploit well-known weaknesses, and IoT users will suffer from preventable attacks.

Conclusion

IoT devices are collecting vast amounts of data and controlling critical systems. Given the advances in data analytics, any information from any device on your network, no matter how mundane, might reveal sensitive information in combination with other data sources. IoT device makers must consider the security of that data carefully, and data subjects should make informed decisions about allowing data to be collected. Device makers must include security in their designs, avoiding well-known vulnerabilities. Any network is only as secure as its weakest node. In a house full of networked devices, that could well be your toaster. Maybe that toaster security upgrade is worth the price.

Karen Martin is a San Jose-based technical writer with over a decade of experience in Information Security.

¹ http://www.livinginternet.com/i/ia_myths_toast.htm retrieved 2/26/2017
² www.cbronline.com/news/internet-of-things/consumer/iot-security-breach-forces-kitchen-devices-to-reject-junk-food-4544884 retrieved 2/23/2017
³ https://www.theatlantic.com/technology/archive/2016/10/we-built-a-fake-web-toaster-and-it-was-hacked-in-an-hour/505571/
⁴ Checkoway, Stephen, Damon McCoy, Brian Kantor, Danny Anderson, Hovav Shacham, Stefan Savage, Karl Koscher, Alexei Czeskis, Franziska Roesner, and Tadayoshi Kohno. "Comprehensive Experimental Analyses of Automotive Attack Surfaces." In USENIX Security Symposium. 2011.
⁵ Burns, A. J., M. Eric Johnson, and Peter Honeyman. "A brief chronology of medical device security." Communications of the ACM 59, no. 10 (2016): 66-72.
⁶ Stilgherrian (21 October 2011). "Lethal medical device hack taken to next level". CSO Online (Australia). Retrieved 2 August 2013; Parmar, Arundhati (1 March 2012). "Hacker shows off vulnerabilities of wireless insulin pumps". MedCity News. Retrieved 7 August 2013.
⁷ Kushner, David. "The real story of Stuxnet." IEEE Spectrum 3, no. 50 (2013): 48-53.
⁸ Lee, Robert M., Michael J. Assante, and Tim Conway. "German steel mill cyber attack." Industrial Control Systems 30 (2014).
⁹ Ornstein, Charles, "Federal Privacy Law Lags Far Behind Personal Health Technologies", Washington Post, Nov 17, 2015, https://www.washingtonpost.com/news/to-your-health/wp/2015/11/17/federal-privacy-law-lags-far-behind-personal-health-technologies, retrieved 2/28/2017
¹⁰ United States v. Miller, 425 U.S. 435 (1976)
¹¹ Mole, Beth, "Ohio Man's Pacemaker Data Betrays Him in Arson Insurance Fraud Case", https://arstechnica.com/science/2017/02/ohio-mans-pacemaker-data-betrays-him-in-arson-insurance-fraud-case/ retrieved 2/28/2017
¹² Fox-Brewster, Thomas, "Cartapping – How Feds Have Spied on Connected Cars for 15 Years", www.forbes.com, 1/15/2017, retrieved 3/7/2017
¹³ Sweeney, Latanya. "Simple demographics often identify people uniquely." Health (San Francisco) 671 (2000): 1-34.
¹⁴ Ohm, Paul. "Broken promises of privacy: Responding to the surprising failure of anonymization." (2009).
¹⁵ http://articles.chicagotribune.com/1991-01-16/news/9101050225_1_frank-meeks-pizza-pentagon and http://www.washingtonpost.com/wp-srv/politics/special/clinton/stories/pizza121998.htm
¹⁶ Apa, Lucas, and Carlos Mario Penagos. "Compromising industrial facilities from 40 miles away." BlackHat (2013).
¹⁷ https://krebsonsecurity.com/tag/mirai-botnet/
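The factory-default-credential weakness behind the Mirai botnet, discussed under "Common Mistakes" above, is also one of the easiest to audit for. The sketch below is deliberately minimal and hypothetical: the device inventory, the function name, and most of the credential pairs are invented (root/xc3511 does appear in the published Mirai source), and a real audit would attempt an actual Telnet or SSH login rather than a dictionary lookup:

```python
# Minimal, hypothetical sketch of a defensive audit for factory-default
# credentials, the weakness the Mirai botnet exploited.

DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("root", "xc3511"),  # a pair that appears in the published Mirai source
}

def flag_default_credentials(devices):
    """Return the hosts still configured with a known factory-default pair."""
    return [
        d["host"] for d in devices
        if (d["user"], d["password"]) in DEFAULT_CREDENTIALS
    ]

# Invented inventory for illustration.
inventory = [
    {"host": "camera-01", "user": "root", "password": "xc3511"},
    {"host": "dvr-7", "user": "admin", "password": "S3cure!pass"},
    {"host": "router-2", "user": "admin", "password": "admin"},
]

print(flag_default_credentials(inventory))  # flags camera-01 and router-2
```

A check of this shape, run against a real device inventory, catches exactly the class of devices Mirai recruited: those that shipped with, and still use, well-known default username and password pairs.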

Infrastructure as Destiny: How Purdue Builds a Support Fabric for Big Data Enabled-IoT

Dana Gardner

The next BriefingsDirect Voice of the Customer digital business transformation case study examines how Purdue University in Indiana has created a strategic IT environment to support dynamic workload requirements. We recently sat down with Gerry McCartney, Chief Information Officer at Purdue, to learn how Purdue extended an R&D support infrastructure to provide a common and increasingly software-defined approach to support myriad types of demands by end users and departments. The discussion was moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions. Here are some excerpts:

Gardner: We hear an awful lot about digital disruption in other industries. Is there digital disruption going on at university campuses as well, and how would you describe that?

McCartney: You can think of a university as consisting of three main lines of business, two of which are our core activities: teaching and educating students, and producing new knowledge or doing research. The third is the business of running that business. A very large infrastructure is built up around that third leg, for a variety of reasons.

But if we look at the first two, research in particular, which is where we started, this concept of the third leg of science has been around for some time now. It used to be just experimentation and theory creation. You create a theory, then you do an experiment with some test tubes or something like this, or grow a crop in the field. Then you would refine your theory, and you would continue in that kind of dyadic mode of just going backwards and forwards.

That was all right until we wanted to crash lorries into walls or fly a probe into the sun. You don't get to do that a thousand times, because you can't afford it, or it's too big or too small. Simulation has now become what we refer to as the third leg of science.

Slightly more than 35 percent of our actual research now uses high-performance computing in some key parts of it to produce results, which then shape the theory formulation and the actual experimentation, which obviously still goes on.

Around teaching, we've seen for-profit universities, and we've seen massive open online courses (MOOCs) more recently. There's a strong sense that the current mode of instructional delivery cannot stay the same as it has been for the last hundreds of years and that it's ripe for reform. Indeed, my boss at Purdue, Mitch Daniels, would be a clear and vibrant voice in that debate himself. To go back to my earlier comments, our job there is to be able to provide credible alternatives, credible solutions to ideas as they emerge. We still haven't figured that out collectively as an industry, but it's something that is at the forefront of a lot of people's minds.

Gardner: Suffice it to say that information technology will play a major role in that, whatever it is.

McCartney: It's hard to imagine a solution that isn't actually completely dependent upon information technology, for at least its delivery, and maybe for more than that.

Gardner: So, high-performance computing is a bedrock for the simulations needed in modern research. Has that provided you with a good stepping-stone toward a more cloud-based, distributed-computing fabric, and ultimately a composable-infrastructure environment?

McCartney: Indeed it has. I can go back maybe seven or eight years at our place, and we had close to 70 data centers on our campus. And by a data center, I mean a room with at least a 200-amp supply and at least 30 tons of additional cooling, not just a room that happens to have some computers in it. Those stand-alone data centers are almost all gone now, thanks to our community cluster program, and the long game is that we probably won't have much hardware on our campus at some point a few years from now.

Right now, our principal requirement is around research computing, because we have to put the storage close to the compute. That's just a requirement of the technology.

In fact, many of our administrative services right now are provided by cloud providers. Our users are completely oblivious to that, but we have no on-premises solution at all. We're not doing travel, expense reimbursement, and a variety of back-office things on our campus at all.

That trend is going to continue, and the forcing function there is that I can't spend enough on security to protect all the assets I have. So, rather than spend even more on security and still fail to provide a completely secure environment, it's better to go to somebody who can provide that environment.

Gardner: Tell us about the unique requirements in a university environment, where you need to provide a common, maybe centrally managed approach to IT cost, security, and manageability, but also see to the unique concerns and requirements of individual stakeholders.

McCartney: All universities are, as they should be, full of self-consciously very smart people who are all convinced they could do a job, any particular job, better than the incumbent is doing it. Having said that, the vast bulk of them have very little interest in anything to do with infrastructure.

The way this plays out is that the central IT group provides the core base services: the network, the wireless services, base storage, base compute, things like that. As you move to the edge, you provide the things that make a difference at the edge.

In other words, if you have a unique electrical device that you want to plug into a socket in the wall because you are in paleontology, cell biology, or organic chemistry, that's fine. You don't need your own electricity-generating plant to do that. I can provide you with the electricity. You just need the cute device and you can do your business, and everybody is happy. Whatever the IT equivalent to that is, I want to be the energy supplier. Then you have your device at the edge that makes a difference for you. You don't have to worry about the electricity working; it's just there. I go back to that phrase "operational credibility." Are we genuinely surprised when the service doesn't work? That's what credibility means.

Gardner: What sort of infrastructure software environment do you think will give you the opportunity to make the right choices when you decide on-prem versus cloud, even for those intensive workloads that require a tight data-and-compute link?

McCartney: The worry for any CIO is that the only thing I have that's mine is my business data. Anything else -- web services, network services -- I can buy from a vendor. What nobody else can provide me are my actual accounts, to use a business term, but that can be research information, instructional information, or just regular bookkeeping information.

When you come into the room of a new solution, you're immediately looking at the exit door. In other words, when I have to leave, how easy, difficult, or expensive is it going to be to extract my information back from the solution?

That drives a huge part of any consideration, whether it's cloud or on-prem, or whether it's a proprietary or open-code solution. When this product dies, the company goes bust, we lose interest in it, or whatever, how easy, expensive, or difficult is it for me to extract my business data back from that environment? Because I am going to need to do that.

Gardner: So, to me, that really starts to mean IT as a service, not just electricity or compute storage. It's really the function of IT. Is that in line with your thinking, and how would you best describe IT as a service?

McCartney: I think that's exactly right, Dana. There are two components to this. There's an operational component, which is: are you a credible provider of whatever the institution decides the services are that it needs, lighting, air-conditioning, or the IT equivalents of those? They just work. They work at reasonable cost; it's all good. That's the operational component.

The difference with IT, as opposed to other infrastructure components, is that IT itself has the capability to transform entire processes. That's not true of other infrastructure. I can take an IT process and completely reengineer something that's important to me, using advantages that the technology gives me.

For example, I might be concerned about student performance in particular programs. I can use geolocation data about their movement. I can use network activity. I can use a variety of other resources available to me to help in the guidance of those students on what's good behavior and what's helpful behavior toward an outcome that they want. You can't do that with an air-conditioning system.

IT has that capability to reinvent itself and reinvent entire processes. You mentioned the way that things like Uber have entirely disrupted the taxi industry. I'd say the same thing here.

There's one part of the CIO's job that's operational: does everything work? The second part is, if we're in a transition period to a new business model, how involved are the IT leaders in your group in that discussion? It's not just "can we do this with IT or not"; it's more, can a CIO and the CIO's staff bring an imagination to the conversation that is a different perspective than other voices in the organization? That's true of any industry or line of business.

Are you merely there as a handmaiden waiting to be told what to do, or are you an active partner in the conversation? Are you a business partner? I know that's a phrase people like to use. There's a kind of great divide there.

Gardner: I can see where IT is a disruptor, and it's also a solution to the disruptor, but that solution might further disrupt things. So it's really an interesting period. Tell me a little bit more about this concept of student retention using new technologies -- geolocation, for example -- as well as big data, which has become more available at much lower cost. You might even think of analytics as a service as another component of IT as a service. How impactful will that be on how you can manage your campus, not only for student retention, but perhaps for other aspects of a smarter, intelligent-campus opportunity?

McCartney: One of the great attractions of small educational institutions is that you get a lot of personalized attention. The constraint of a small institution is that you have very little choice. There's a small number of faculty, and they simply can't offer the options and different concentrations that you get in a large institution.

In a large institution, you have the exact opposite problem. You have many, many choices, perhaps even too many subjects that, as a 19-year-old, you've never even heard of. Perhaps you get less individualized attention, and you fill that gap by taking advice from students who went to your high school a year before, people in your residence hall, or people you bump into on the street. The knowledge that you acquire there is accidental, opportunistic, and not structured in any way around you as an individual, but it's better than nothing.

There are advisors, of course, and there are people, but you don't know these individuals. You have to go and form relationships with them, and they have to understand you and you have to understand them.

A big-data opportunity here is to be able to look at the students at some level of individuality: "Look, this is your past, this is what you have done, this is what you think, and this is the behavior we are not sure you're engaging in right now. Have you thought about this path? Have you thought about this kind of behavior for yourself?"

A well-established principle in student services is that the best indicator of student success is how engaged students are in the institution. There are many surrogate measures of that, like whether they participate in clubs. Do they go home every weekend, indicating they are not really engaged, that they haven't made that transition? Independent of your academic ability, your SAT scores, and the GPA you got in high school, for students that engage, that behavior is highly correlated with success and good outcomes, the outcomes everybody wants.

As an institution, how do you advise or counsel students who say perhaps there's nothing here they're interested in? That can be a problem with a small institution. It's very intimate. Everybody says, "Dana, we can see you're not having a great time. Would you like to join the chess club or the drafts club?" And you say, "Well, I was looking for the Legion of Doom Club, and you don't seem to have one here."

Well, if you go to a large institution, they probably have two of those things, but how would you find them, and how would you even know to look for that? How would you discover new things that you didn't even know you liked, because the high school you went to didn't teach applied engineering or a whole pile of other things, for that matter?

Gardner: It's interesting when you look at it that way. The student retention equation is, in a business sense, the equivalent of user experience, personalization, engagement, share of wallet, those sorts of metrics. We have the opportunity now, probably for the first time, to use big data, Internet of Things (IoT), and analytics to measure, predict, and intercede at a behavioral level. So in this case, to make somebody a productive member of society at a capacity they might miss, and you only have one or two chances at that, seems like a rather monumental opportunity.

McCartney: You're exactly right, Dana. I'm not sure I like the equivalence with a customer, but I get the point that you're making there. What you're trying to do is to genuinely help students discover an effective path for themselves and learn that. You can learn it randomly, and that's nice. We don't want to create a kind of railroad track: well, you're here; you've got to end up over there. That's not helpful either.

My own experience, and I don't know about other people listening to this, is that you have remarkably little information when you're making these choices at 19 and 20. Usually, if you were getting direction, it was from somebody who had a plan for you that was based more on their experience of life, some 20 or 30 years previously, than on your experience of life.

So where big data can be a very effective play here is to say, "Look, here are people that look like you, and here are the choices they've made. You might find some of these choices interesting. If you do, then here's how you'd go about exploring that."

As you rightly say, and implicitly suggested, there is a concern with the high costs, especially of residential education, right now. The most wasteful expenditure there is where you do a year or two to find out you shouldn't ever have been in this program; you have no love for this thing, you have no affinity for it.

The sooner you can find that out for yourself and make a conscious choice, the better. We see big data having a very active role in that, because one of the great advantages of being in a large institution is that we have tens of thousands of students over many years. We know what those outcomes look like, and we know the different choices that different people have made. Yes, you can be the first person to make a brand new choice, and good for you if you are.

Gardner: What comes next -- high-performance computing (HPC), fabric cloud, IT as a service -- is there another chapter on this journey that perhaps you have a bead on that we're not aware of?

McCartney: Oh my goodness, yes. We have an event now that I started three years ago called "Dawn or Doom," which asks whether technology is a forcing function, if it is; we're not even going to assert that definitely. Are we reaching a point of a new nirvana, a new human paradise where we've resolved all major social and health problems, or have we created some new seventh circle of hell where it's actually an unmitigated disaster for almost everybody, if not everybody? Is this the end of life as we know it? We create robots that are superior to us in every way, and we become just some intermediate form of life that has reached the end of its cycle.

This is an annual event that's free and open. Anybody who wants to come is very welcome to attend. You can Google "Dawn or Doom Purdue." We look at it from all different perspectives. So we obviously have engineers and computer scientists, but we also have psychologists, and we have labor economists. What about the future of work? If nobody has a job, is that a blessing or a curse?

Psychologists, philosophers: what does it mean, what does artificial intelligence mean, what does a self-conscious machine mean? Currently, of course, we have things like food security to worry about. And the Zika virus -- are we spawning a whole new set of viruses we have no cure for? Have we reached the end of the effectiveness of antibiotics or not?

These are all incredibly interesting questions I would think any intelligent person would want to at least probe around, and we've had some significant success with that.

Gardner: When is the next Dawn or Doom event, and where will it be?

McCartney: It will be in West Lafayette, Indiana, on October 3 and 4. We have a number of external high-profile keynote speakers, then we have a passel of Purdue faculty. So you will find something to entertain even the most arcane of interests. [For more on Dawn or Doom, see: Dawn or Doom: The Risks and Rewards of Emerging Technologies.]

Analyst Dana Gardner hosts conversations with the doers and innovators—data scientists, developers, IT operations managers, chief information security officers, and startup founders—who use technology to improve the way we live, work, and play. View an archive of his regular podcasts.

19 20 Futurists Explain Why Technology Will ‘Disappear’ in 2030 Learn which technologies will completely transform and BY ATLANTIC RE:THINK which will completely disappear as IoT continues to expand

ens of billions of devices, sensors, vehicles and people will become interconnected over the next 10 to 15 years as the so-called Internet of T Things (IoT) expands from about 11 billion connections today, to 30 billion by 2020, to 80 billion by 2025. And in fact, those estimates may prove low. But the good news is, estimates will become increasingly easier to make. “In the future, we’ll be able to better predict the future,” said Tom Bradicich, vice president and general manager of Servers and Internet of Things Systems for Hewlett Packard Enterprise. Algorithms will build on algorithms, with every prediction smarter than the last. Bradicich isn’t alone in his opinion on how interconnectedness will transform transportation. We spoke to a number of futurists from across industries who forecasted how the Internet of Things will help shape what many are calling the next Industrial Revolution. And be prepared, these predictions are sufficiently bold.

1 Driver's licenses will not exist—you won't need one, because you won't be driving

Through the first half of 2016, road fatalities in the U.S. climbed more than 10 percent compared with the same period the year before, which itself saw an overall spike in traffic deaths of more than seven percent, the biggest increase in almost a half century. Tens of thousands die every year.

In 10 or 15 years, people who own cars will be thought of the same way as people today who own their own planes: They must have too much money or be obsessed hobbyists or both. Whether you want to drive or not—even as a hobby—the decision might not be up to you.

"It's possible it will be illegal to drive a car," said Bradicich. Driverless cars could quickly result in, say, 90 percent fewer accidents, at which point we'll start hearing, "I don't want my neighbor down the road driving because he's possibly nine times more likely to hit me than an autonomous vehicle," Bradicich added.

In interviews with more than a dozen prominent futurists, academics and consultants on the effects of IoT, it was unanimous that people will drive less, and everyone will benefit.

"Losing 30,000 people a year is really unacceptable, but we live with that," said Cindy Frewen, a professor and board chair of the Association of Professional Futurists. "We've become numb to that fact, yet we take that risk every day. We talk about other things that are high risk, and they are, but cars are one we really don't talk about."

Humans, historically, have proven they aren't the best at understanding or mitigating risks. That will quickly change, and it will be one of the single greatest benefits of IoT, according to Marti Ryan, a consultant and the former CEO of Telematic, a cloud-based platform that provides auto insurance.

"Personalization is what I see coming with IoT, and by that I mean personalized, location-based, just-in-time marketing, and personalized risk profiling," she said. "You'll be paying for the risks that you choose to take. I don't know if it'll be on a per-second, per-minute or per-day basis, but we'll get to a point where when we make choices, we'll pay for those choices."

In other words, people who drive only sporadically won't need full-time auto coverage. Being conscious of more decisions means people, ultimately, will begin making better decisions. Insurance will be peer-to-peer, spreading by word of mouth, and people will team up to pool their collective risks and save money. That shared data will lead to even greater efficiencies, as people will be able to collect and sell their own data, Ryan said.

"I'll own all of my own data and get to shop my own data for insurance or financial services," she said. "Companies that realize it's all about the consumer and put the power of the data back in the consumer's hands will rise to the top. The more companies take our data and do something useful, the more we'll be conscious that our data is being shared, and the more we'll want to share."

2 Data will become more like currency

Ryan wants her life optimized. She wants a device to explicitly tell her things, including how much time she should spend looking at a screen or going to a museum or exercising on a given day. Exactly how beneficial, from a health standpoint, would an extra 20 minutes of jogging be on a given day?

"Provide some value to me," Ryan said. "Save me time or money, and continue to do that. Don't be a bad steward of my data, and I'll continue to give it to you."

Christopher Bishop, a board member of Teach the Future and TEDx TimesSquare who spent 15 years at IBM, agrees that data will become more of a currency. "People will want to buy your data for a survey or to participate in a focus group," he said. "There will be chips that have all that data stored. You'll monitor and manage what goes in there." And speaking of currency, futurists say you also won't use cash.

3 People will visit doctors less often

For day-to-day health needs, sensors on your clothing will monitor your vitals and provide constant biometric updates—eliminating the need for annual in-person check-ups. For more extreme healthcare needs, futurists say that in 2025 or 2030, your driver's license will no longer say whether you're an organ donor, because no one will be an organ donor anymore. Imperceptibly fast connections between 3D printing devices and medical data repositories will build new organs on demand, possibly before those who need them even know that they need them. A microchip in a device on that person's arm or ear—or in that person's arm or ear—may buzz to alert him or her, similar to receiving an email or text today, that it's time to swap out a kidney for a new one perfectly crafted for their body, by their body. And it will be replaced with one that can't possibly be rejected.

4 Food waste will be nearly eliminated

Sensors will be everywhere, even on our food. Edible sensors will prevent spoilage and optimize global food deliveries, Bradicich said. "We'll be able to know where there's food shortages," he said. "When food can have sensors on it, its spoilage rate can drop. If you're shipping it, it can be routed in a way to prevent fruit from spoiling."

We'll also be able to simply grow more food, a tremendous boon for farmers in developing nations, especially given the potential effects of climate change, said Christopher Kent, a partner and founder of Foresight Alliance. "There's a scientist who's figuring out a way to print paper sensors that you can plant in the soil to measure alkalinity, salinity and how much water there is," Kent said. "That may not be life changing for farmers in the developed world, but it's a huge development for farmers in developing countries."

Printable sensors will be cheap and will also tell farmers optimal times to plant optimal quantities of optimal crops, with up-to-the-nanosecond climate forecasts.

5 Money saved from energy efficiencies will be used to revamp urban infrastructure

IoT will be crucial in urban areas, too, as it will save large companies billions, if not trillions, of dollars. "You're going to see amazing savings in commercial buildings," said Lee Mottern, a member of the Association of Professional Futurists and former Defense Department civilian intelligence analyst. "They'll save a fortune with smart buildings. I do see a lot of adoption in industry. It cuts manpower, it cuts energy costs and you can even disengage the building from the grid," similar to unplugging an unused charging device from a wall socket, saving additional electricity.

Much of what's saved will be invested in building and maintaining infrastructure networks that most people haven't yet imagined. Cars can't drive themselves without smart roads and smart signs and smart intersections. Paying for that infrastructure will prove more challenging. With fewer people driving—both because of driverless transportation and with so many more working from home on high-speed connections—municipalities won't get money from traffic tickets. People may not buy as much locally, crushing the tax base. If more people share housing or don't buy homes, or move around more frequently, local revenues will plummet and many legacy organizational structures found in cities will crumble from lack of funds.

Governments will have to think of other ways to provide goods or services, perhaps through public-private partnerships. Maybe those partnerships will focus on creating the necessary set of protocols for everything to talk to everything else simultaneously. "Standards and policies are going to be critical to making that all work as the deluge of data continues," Bishop said.



6 Cyber attackers will be more motivated than ever

Security of IoT also will be critically important. Several futurists interviewed said devices may reach a point where they only respond to us—perhaps requiring a constant, pulsing connection to our DNA—but many were concerned that hackers will have more incentives than ever before to break down those barriers. One recent attack using baby monitors, connected cameras and home routers took down several major websites, an innovative exploit the Department of Homeland Security had anticipated with a warning just a week before.

"One thing you're already seeing is that the IT security consulting industry is growing pretty significantly, and that'll continue to happen," said David Stehlik, a consultant, professor and certified ethical hacker. "The successful players in the next 10 years are going to be the ones who've already established the framework within their own organizations to manage those issues and their brand going forward. Those who create the standards will have a leg up."

7 Those high-tech contact lenses in sci-fi movies will exist

Dolan said he probably favors personalized earpieces over watches or implanted sensors. Others, including Bradicich, think all of our personal data will be implanted in contact lenses that will give us the supreme combination of security and functionality. "Your entire smartphone will be in contact lenses," he said. The fluids in our eyes, and perhaps in nearby blood vessels, will feed data back into the lenses to give us constant health updates. The lenses may even provide personal security, assuming we don't already have sufficient sensors in our clothing, "like those extra buttons tacked onto the inside of a dress shirt," he said.


8 We'll create robots that create robots

These devices will have to be created by people—or by robots created by people, at least until robots are smart enough to create manufacturing robots without human input. That might not be far off. The entire nature of work will be transformed through IoT, and maybe John Maynard Keynes' decades-old prediction of a 15-hour workweek will finally come true. The best jobs of the IoT era don't yet exist, just as many of the best jobs of the '90s didn't exist in the '60s. According to a survey conducted by Visual Capitalist, people in the future will make a living working as neuro-implant technicians, virtual-reality designers and 3D-printing engineers.


9 Humans will be able to do more good

Life will be less about stuff, even though all of your stuff will be constantly talking to all of the other stuff. People will be living longer—perhaps, according to some surveyed, 150 or 200 years—and spending more of their newfound free time helping others, Bishop said. "I think we're going to see increased leisure time, but it'll allow for the addressing of more global problems as global awareness increases," he said. "With the tremendous value and wealth created by these companies, we're going to see Gen-Z even more focused on the common good and corporate sustainability."

That focus will increase as everything gets faster and easier because of IoT. There won't be communication barriers because different languages will be instantly translated as we talk. People otherwise too young or old or infirm to operate a vehicle will get anywhere they want safely and quickly. Your clothing will talk to your refrigerator after you eat half a pizza for lunch, and perhaps the two devices will politely suggest having a salad for dinner. Evidence from a crime scene will be collected and recorded automatically. In fact, crime rates could plummet. More people will be doing things they actually want to do.

"One thing IoT does is increase the opportunity for individuals to be more self-reliant," said Unique Visions' Joe Tankersley, a 20-year Disney veteran and Imagineer. "I could run a craft factory in my own garage. We'll have new entrepreneurship because of that." Similarly, low barriers to entry will result in more people growing their own food. "As humans," he said, "we tend to overvalue our unique contributions, but it doesn't mean that we won't see a situation where, for instance, as people prefer craft beer to beer brewed by a giant corporation, you might see a future where people prefer a product made by a human vs. automation, because it's human made. For everyday items, we'll care less and less."

10 Technology will become invisible

Tankersley thinks we'll reach a point where we'll be surrounded by so much technology that we'll then be surrounded by none. "The ultimate goal of all this technology is for it to disappear," he said. "The only people who really think a cell phone is a good form factor are the people who manufacture cell phones. You've got to carry it, it's bulky, it breaks. People seem to think that these devices are what we're obsessed with—we're really not. We're obsessed with the fact that these devices provide a new kind of connection for us. And if we can make that connection less obtrusive, why wouldn't we?"

Previously published in HPE Matter.

State of the IoT Union

HPE to work with Tata Communications to build world's largest IoT Network in India

Nigel Upton Director & General Manager IoT/GCP Communications Solutions Business Hewlett Packard Enterprise

Once the subject of sci-fi novels, smart cities are now a global phenomenon. From implementing smart transportation systems to providing contextual, location-based services to their citizens, cities are becoming more and more intelligent. Of course, different cities have very different needs and challenges, but many are trying to solve the same problems and want to improve the quality of life for their inhabitants and increase overall efficiency and safety.

Let me illustrate this reality with a very concrete example. In February at Mobile World Congress 2017, HPE announced it is working with Tata Communications to support the roll-out of India's first LoRaWAN™ (LoRa) based network. It is part of Tata Communications' long-term strategy of creating mobile platforms and ecosystems that enable its customers and partners to connect people and Internet of Things (IoT)-connected devices seamlessly on a global scale. The first phase of the roll-out will reach over 400 million people across India.

The sheer size of this project is incredible, bringing new services to millions of people. Through our partner-centric approach, the HPE Universal IoT Platform will enable Tata Communications to build multiple vertical use cases for its IoT network in India on a common platform with a common data model.

So let's look in detail at what's fueling this burgeoning interest in making cities smart. I see three key drivers:

Major city centers are exploding in size, scale, and complexity

Readers living in densely populated areas can probably relate to problems such as insufferable traffic, lack of street parking, and overflowing waste bins. And it will get worse: by 2050, 66% of the global population will live in urban areas.¹ IoT solutions can help cities become more efficient and sustainable as demand rises.
The speed and availability of technology is pushing cities to become more intelligent

The growing ubiquity of sensors, applications, and mobility is helping cities connect and, more importantly, enabling them to collect and analyze data to produce insights never before possible. That translates to uncovering faster, better, and more cost-effective ways to deliver public services.

Smart cities will be the engine of global economic growth

By 2025, the top 600 cities in the world will generate nearly 65% of world economic growth.² A city that wants to become a leading player will need a competitive advantage. Delivering public services efficiently will be critical as city governments work to improve the quality of life for residents—and technology is the catalyst for this continued transformation.

The association between Tata Communications and Hewlett Packard Enterprise (HPE) paves the way for a new era in enabling devices with embedded connectivity for Enterprise customer solutions throughout the country. The project involves connecting devices, applications and other IoT solutions over the LoRa network in smart buildings, campuses, utilities, fleet management, security and healthcare services in nearly 2,000 communities, making it a first-of-its-kind initiative in India.

"Tata Communications has 15 years of experience in delivering impactful and innovative communications solutions to its customers globally," said Anthony Bartolo, president, Mobility, IoT and Collaboration Services, Tata Communications. "As part of our commitment to innovation and in driving digital transformation globally, we are creating a cohesive, resilient and highly secure network to deploy IoT applications in India. We are excited to be partnering with HPE in this project, as this platform is critical to amalgamating all the complex variables in enabling a truly digital India."

Tata Communications has also selected HPE to be an integral part of its global cellular IoT connectivity services. This provides a range of domestic and cross-border IoT connectivity and management services, particularly for applications requiring elements of mobility, such as connected cars, fleet management and transportation services.

IoT Platform and Smart Cities

It's important to remember that cities are heterogeneous environments—many have diverse economies, technical capacities, and connectivity requirements. Different public services require and consume different IT resources, which causes these services to be operated in silos. For example, some smart city initiatives may require different types of networks (Wi-Fi, 3G/4G, LPWAN such as LoRa, etc.) to connect a myriad of sensors like smoke detectors, water flow gauges, electricity meters, and motion detectors. With this degree of diversity, cities need a platform capable of unifying and streamlining the management of varying types of protocols, devices, and networks.

A key goal of a smart city is to enhance the use of public resources, increasing the quality of services offered to its citizens while reducing operational costs. While this objective cannot be achieved with technology alone, leveraging the deployment of IoT within a city can go a long way to reaching this goal.

This is where the HPE Universal IoT Platform comes in. This vendor-agnostic platform makes it easy for cities to add and manage new public services. It simplifies the management of various connected devices—as well as the different types of networks that connect them—regardless of their nature. More importantly, the Universal IoT Platform creates a collaborative environment for separate applications to access and provide data for analysis—thus deriving valuable insights that otherwise would not be possible.

Designed for massive scale and multi-vendor, multi-network support using the oneM2M interoperability standard, the HPE Universal IoT Platform streamlines interoperability and management of heterogeneous IoT devices and applications that power the intelligent edge. The platform supports long-range, low-power connectivity deployments, as well as devices that use cellular, radio, Wi-Fi and Bluetooth. Recent enhancements announced for the HPE Universal IoT Platform include increased LoRa gateway support, enabling the use of multiple LoRa gateways with a common set of applications to simplify device provisioning and control in heterogeneous LoRa environments.

The value of the IoT lies in enriching data collected from devices with analytics and exposing it to applications that enable organizations to derive business value. The HPE Universal IoT Platform dramatically simplifies integrating diverse devices with different communications protocols, enabling customers to realize tremendous benefits from their IoT data, and is designed to scale to billions of transactions, tried and tested in rigorous large-scale Global Telco and Enterprise environments in a variety of smart ecosystems.

Here are two areas where the HPE Universal IoT Platform can help cities transition into smarter ones:

Smart parking

These days, you're lucky if you only have to drive around for 20 minutes to find a parking spot. The result is frustrated citizens, snarled traffic, and increased CO2 emissions. Fortunately, IoT promises to change all that. Using real-time traffic information systems and smart parking meters that are able to detect if a parking space is occupied, cities across Europe and North America are reducing the time and cost of transportation and providing a healthier way of life through a real-time view of parking availability.
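The occupancy idea behind smart parking can be sketched in a few lines. This is purely illustrative: the function, street names, and data shape are invented for the example and are not part of any HPE product or API.

```python
# Hypothetical sketch of the smart-parking idea above: each meter reports
# whether its space is occupied, and the city publishes a real-time count
# of free spaces per street. Names and data are illustrative only.
from collections import defaultdict

def free_spaces_by_street(meter_reports):
    """meter_reports: iterable of (street, occupied) tuples from parking meters."""
    free = defaultdict(int)
    for street, occupied in meter_reports:
        if not occupied:
            free[street] += 1
    return dict(free)

reports = [
    ("Elm St", True), ("Elm St", False), ("Elm St", False),
    ("Main St", True), ("Main St", True), ("Oak Ave", False),
]
print(free_spaces_by_street(reports))  # {'Elm St': 2, 'Oak Ave': 1}
```

A real deployment would of course ingest these reports over a network such as LoRa and expose the counts to a driver-facing app, but the aggregation itself is this simple.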

Intelligent waste management

For ages, garbage trucks in cities around the world have traveled the same routes daily, collecting waste whether a container is full or not. However, IoT can optimize this process by installing connected sensors in waste bins to monitor the level of rubbish inside. By emptying containers only when they are full, cities can realize massive operational savings. That's critical given that waste management is typically one of the highest expenses a city incurs.

Smart cities are urban areas that use digital technologies in a secure fashion to manage the municipality's assets, enhance sustainable economic development, reduce costs and resource consumption, and support the well-being of their citizens. Smart cities have become a global phenomenon, and municipal leaders around the world are interested in the potential opportunities as they prepare their cities for the future. Beyond marketing and technology, an effective smart city strategy takes a city's cultural, socioeconomic, environmental, and geographical realities into account and requires collaboration between stakeholders—from policy makers to citizens—with assistance from trusted, experienced information and communication technology (ICT) partners. Innovation and the proper implementation of new technologies into a smart city strategy requires careful contemplation. ICT partners play a pivotal role in a project's development and implementation, and therefore its ultimate success. Hewlett Packard Enterprise helps customers use technology to slash the time it takes to turn ideas into value.

With connected public services such as smart parking and waste management being powered by the HPE Universal IoT Platform and HPE analytics portfolio, smart cities are discovering new and innovative ways to drive greater quality of life, economic growth, and sustainable communities.

To learn more about how the HPE Universal IoT Platform makes it possible to build for and capture new value from the proliferation of connected devices, read the white paper "Smart cities and the Internet of Things" or download the waste management case study.

If you are an HPE Partner Ready channel partner, you can accelerate your customers' IoT success today through the HPE IoT Partner Program, providing access to HPE's Universal IoT Platform and the broader portfolio of HPE IoT products, solutions and resources, and the ability to leverage HPE's global market presence.

¹ 'World Urbanization Prospects', United Nations, 2014
² 'Urban world: Cities and the rise of the consuming class', McKinsey Global Institute, 2012

Nigel Upton is Worldwide Director & General Manager, IoT/GCP for Communications & Media Solutions, Communications Solutions Business, Hewlett Packard Enterprise. Nigel's responsibilities include architecture, development, implementation, marketing, sales and support of CMS IoT and GCP solutions. The platforms and vertical solutions target the needs of large Enterprise and Telco customers with global requirements for IoT solutions who seek a single supplier for all connectivity. Nigel's leadership spans management of the IoT partner ecosystem for the IoT platform and for the GCP connectivity relationships globally.

Regularly sought after by the media, Nigel has been interviewed by the Wall Street Journal, Forbes, Bloomberg News, Computer Business Review, ITweb.tv, Information Week, CXO Today, CMSWire, eWeek, Channel Partners Online and many others. He blogs and authors articles on key technology trends. Nigel has also given numerous keynotes for customer, industry and analyst events.

[email protected]

SAVE THE DATE: November 11, 2017, Hyatt Regency San Francisco Airport. Pre-Conference Seminar on November 12, 2017. 201 NonStop presentations are available now.

Timothy Chou's Ten-Year Plan for IT Career Longevity

Janice Reeder-Highleyman


"…if you are not constantly on a 10-year path in the tech field, consider yourself at a disadvantage. Continuing education is more true now than ever."

Tim Chou (@timothychou) has a theory. This former Tandem programmer (first job out of university) and former President of Oracle On Demand believes that to remain relevant nowadays – and employed – a person must adhere to a 10-year path: 2 years to learn a skill and 8 years to execute the skill. Then recycle yourself with a new specialization, and start over.

This, at least, is what Tim counsels his students at Stanford University, where he lectures on topics such as cloud computing. He tells those kids, "Don't sit still." To seasoned professionals who ask his advice, he encourages them to go back and hit the books. Tim says that the ability to graduate from school and work for the same IT company for 40 or 50 years rarely exists anymore. Simply put, "if you are not constantly on a 10-year path in the tech field, consider yourself at a disadvantage. Continuing education is more true now than ever."

Who is this guy? For one thing, he is a man who has taken his own advice – always learning, never allowing his knowledge base to grow stale, ever receptive to a new idea or opportunity.

Fresh out of the University of Illinois with a doctorate in Electrical Engineering and a focus on VLSI design, Tim took fourteen interviews and was offered thirteen jobs. The majority were in hardware engineering, but Tim was concerned that the future would hold limited opportunities for computer architects. Instead, he went to work as a software engineer for Tandem Computers, one of the original Silicon Valley startups.

Tandem pioneered fault-tolerant, highly scalable systems. Established in 1974, Tandem under company founder and CEO Jimmy Treybig was known as much for its unique corporate culture as it was for its ultra-reliable technology. It was acquired by Compaq Computers in 1997 and became part of Hewlett-Packard in 2002. Today, Tandem innovation benefits users of HPE Integrity NonStop Servers.

We mentioned Jimmy Treybig. He was the one who hired Tim Chou at Tandem, and they have been crossing paths professionally and personally for decades. In one testimonial to his friend and former employee, Jimmy wrote, "Over the years, Tim has brought a unique set of skills to every job he's undertaken. …He's always been at the cutting edge of technology."

Exactly what are those other jobs, and how has Dr. Chou defined his own career plan for IT relevance? We had the opportunity to interview Tim after his keynote speech at last year's NonStop Technical Boot Camp. His presentation, "Precision Planet," focused on the Internet of Things and its immense potential to impact industry. IoT also is the topic of Tim's latest book, Precision: Principles, Practices, and Solutions for the Internet of Things¹, which introduces readers to IoT basics. Precision is the basis of an online IoT course. Register at www.precisionstory.com/class.

After his tenure at Tandem, from which he departed as a director, Tim moved to Oracle and helped to develop the Oracle 8 version of the Oracle database. He left after two years to become COO of Reasoning Inc., a pioneering application service provider (ASP). Tim parlayed that new concept of centralized computing into his next move. It was a huge leap – back to Oracle and right into the cloud as President of Oracle On Demand. There he guided the company's new Software as a Service (SaaS) delivery model from start-up to a multibillion-dollar business.

From hardware engineer to software developer to enterprise application-delivery specialist: during his efforts to reinvent his own career path, Dr. Chou gained prominence as a cloud evangelist. In his 2004 book, The End of Software: Transforming Your Business for the On Demand Future², Tim makes the case for redefining application hosting from in-house residence to in-cloud residence. In 2015's Cloud Computing: Fundamentals³, Tim uses real-world examples to highlight cloud principles.

Cloud Computing, like several earlier Tim Chou-authored publications, is based on Tim's lectures at Stanford University, where he has taught since 1982. Back then, computer science wasn't an undergraduate major. Times clearly have changed. As a lecturer, Tim introduced Stanford's first course on cloud computing. He also teaches in Beijing, China at Tsinghua University, a top engineering school that often has been referenced as China's MIT. Every Chinese leader in the last three decades has attended Tsinghua.

Tim is as enthusiastic about his role in academia as he was at his first job with Tandem Computers. "My students are smart kids; and I do my best to keep up with them." Like Tim when he left college, his students gravitate toward opportunities they perceive as exciting and more than just a paycheck. Back in the day, Tandem and Oracle were among what Tim says were the "cool places to work." Now Google, Facebook, Amazon, and a host of recent startups rank as vanguards of technological innovation. That makes it tough in mature businesses for those tasked with acquiring top-notch young talent. Several years ago, Tim spoke with the executive of a large financial services firm that was struggling to attract qualified new hires. The executive lamented, "You can walk in our hallways today; and you won't see anyone younger than 40."

In his last lecture of every semester, Tim offers this example. "I tell my students that if I lived in a community of 100 people, we all would be generalists. We would make our own bread, we would fix our own cars, and so on. But in a world of five billion people, value does not accrue to the generalist but instead to the specialist. The trick is in what you decide to specialize in and how often you must reengage in the learning cycle to remain relevant."

At this point in his life, it would be easy enough for Dr. Chou to rest on his laurels. He's over 40. We won't say by how much. Retired from Oracle, he travels the world as a sought-after speaker; continues to write; has been interviewed by publications such as Forbes, Business Week, and The Economist; and has appeared on CNBC, NPR, and other media outlets. Curiously, however, Tim is still operating within his own guidelines – learn a skill, execute the skill, then recycle yourself, and start over.

Where has recycling taken him since Oracle? Like his former boss Jimmy Treybig, Tim has been involved with numerous startups – some as an investor, others in an advisory capacity, all as a student of new technologies. One example is WebEx Communications (now Cisco WebEx), a SaaS provider of online conferencing services. Tim served on WebEx's Technical Advisory Board and today is a board member of the following public companies:

Blackbaud, Inc. (NASDAQ: BLKB) creates software that helps nonprofit organizations with CRM, marketing campaigns, fundraising activities, and analytics.

Teradata (NYSE: TDC) – Tim joined this enterprise software company in February 2017. Teradata offers solutions in Data Warehousing, Data Analysis, and Big Data.

In his "spare time," Tim is the chairman, IoT, of Alchemist Accelerator (www.alchemistaccelerator.com), a venture-backed business incubator that already has provided seed funding for over 80 enterprise startups. Alchemist's backers are Cisco Systems, Draper Fisher Jurvetson, Foundation Capital, Mayfield, Khosla Ventures, Salesforce.com, Sapphire Ventures, Siemens, Tyco, and US Venture Partners.

Via Alchemist, Dr. Chou became an advisor to UniquID (www.uniquID.com), a startup that utilizes blockchain technology to create password-free security for peer-to-peer device authentications. More simply put, UniquID builds digital vaults for companies that engage in the Internet of Things. Think Blockchain as a Service (BaaS). Tim decided to remain an advisor only until he comfortably could explain the concept of blockchain to others. He hasn't left yet.

Then there's Lecida, Inc., "the brain for the industrial Internet."⁴ Tim is the Executive Chairman and co-founded the company with one of his former Stanford University students. Its team of engineers comes from Stanford and UC Berkeley with impressive expertise in machine learning, distributed systems, and cloud computing. Lecida delivers precision digital assistants for people tasked with maintaining industrial machines and optimizing their performance capabilities.

We as consumers understand the concept of personal digital assistants from our voice-activated search experiences with our cellphones or other devices. "Siri, what time is it?" "Cortana, remind me to pick up my dry cleaning tomorrow." Amazon's Alexa and others take it further by providing home automation hubs that control several smart devices. "Alexa, turn off the lights at 11 pm." Lecida's digital assistants interface with combine harvesters, wind turbines, excavators, blood analyzers, and other machines that form the backbones of critical infrastructures. Healthcare, construction, and renewable energy are just three of the industries that benefit from Lecida's machine learning applications. More than Alexa performing an action, more than Siri providing an answer, Lecida's digital assistants "help the world's leading industrial companies learn from their machines." The result? Increased operational efficiencies, real-time intermachine communications, autonomous activities, self-healing, and so much more.

The Lecida technology and that of its competitors is so transformative that the staff meet weekly to share the latest literature and to review papers presented by their engineers. Topics such as clockwork recurrent neural networks fascinate the team and totally baffle Tim Chou. Despite his learn a skill, execute the skill, recycle yourself, and start over philosophy, Tim admits that with this new career path, he's struggling to keep up.

Those who aren't struggling, or at least not quite as much, are the students Lecida attracts to its internship programs. They are the young adults about to enter the workforce; they see artificial intelligence and similar technologies as high-growth career paths no matter the industry. We found online the resume of one Lecida intern, Stanford student Michael Xie⁵. This is how he defined his internship responsibilities: "Developed algorithms for time-series prediction, anomaly detection, and control optimization for high dimensional time series data with missing values for industrial Internet of Things."

To that, this author says, "Huh?" We assume that Tim Chou understands every bit of that. After all, he believes that "all the stars are lined up for IoT," and he is deeply immersed in its potential impact. Of course, it doesn't mean that something new won't catch Tim's attention in a year, maybe two years, but no more than ten for his own IT career longevity. Learn a skill, execute the skill, recycle yourself, and start over.

Next!

Janice Reeder-Highleyman married into IT and has spent the last 30 years pretending to understand all the gobbledygook that technology geeks speak at her. Somehow, she has managed to cobble together quite a decent career as a bylined writer, ghostwriter, researcher, and editor in a variety of industries. In her spare time, Janice likes to scare herself and her students as a flight instructor. Contact Janice at +1 908-459-8363 and at [email protected].

¹ Timothy Chou, PhD, Precision: Principles, Practices and Solutions for the Internet of Things (Lulu.com, 2016), http://amzn.to/2n1eVxa
² Timothy Chou, The End of Software: Transforming Your Business for the On Demand Future (Sams Publishing, 2004), http://amzn.to/2mNcrjK
³ Timothy Chou, Cloud Computing: Fundamentals (Lulu.com, 2015), http://amzn.to/2n3IGgM
⁴ Lecida, Inc. www.lecida.com

but who are not interested in pursuing the more popular 5 https://stanford.edu/~sxie/michael_xie_resume.pdf social media route. Machine learning excites them, and 33 34 eCube Systems Interviews VSI’s Brett Cameron on OpenVMS, Open Source and Kevin Barnes Developer tools Managing Partner Operations Professional Services

eCube Systems is interviewing Brett Cameron, Director of Applications & Open Source Services at VMS Software, Inc. Brett holds a doctorate in Chemical Physics from the University of Canterbury and was a long-time employee of Hewlett-Packard, where he held various positions over 19 years, most recently Senior Architect in the Cloud Services group. Brett is well known in the OpenVMS community and has made a hobby of porting Open Source tools to OpenVMS, including AMQP, RabbitMQ and Erlang.

eCube: Thanks for giving us some of your valuable time for this interview. You have been developing on OpenVMS for a long time and are the go-to guy when people want high performance and customized software on OpenVMS. There are so many things you have worked on in OpenVMS, so there is a lot of ground to cover. What is your favorite thing to work on?

Answer: The tough questions first! In terms of your comment about me having so many interests, this has always been a problem! I will be working on one thing and will then see something else that looks interesting, and before you know it I'm pushed for time to complete whatever it is that I should be doing in the first place. Maybe there's some support group I can join. Probably for as long as I have been involved with software development (dating back to the late 1980s) I have had a keen interest in Open Source software and the potential it provides, and I can recall installing early versions of Linux from some 20 3.5" floppy disks; if you messed up something on the last disk it was back to square one. Fun times. Even around this time (early 1990s) I can recall porting small pieces of Open Source code to OpenVMS, initially to help with aspects of my PhD research, and subsequently for customer-related projects when I started work in 1992 with Digital Equipment Corporation as a FORTRAN programmer (who also happened to know a bit of C and Pascal).

But probably my favorite thing to work on would be integration – helping customers to integrate their "legacy" OpenVMS systems with other systems. I don't know why, but I have always enjoyed playing around with integration software and crafting novel integration solutions. Many organizations seem to think that their old OpenVMS systems are some sort of black box that is unable to communicate with the rest of the world. Possibly they have lost the skills to do this sort of work, or maybe they simply do not know what is possible; however, the simple fact of the matter is that there is a myriad of good integration options available to OpenVMS users, and it is invariably possible to craft a good integration solution that will allow them to integrate their trusted OpenVMS-based application environment with the wider computing ecosystem. More often than not it is possible to do this using Open Source technologies, particularly these days, with more high-quality Open Source solutions being available.

I would also add to this that I enjoy working closely with customers as opposed to always working away in a back room somewhere. Our business is a symbiotic relationship with our customers, and interacting directly with customers is an important part of that relationship. Aside from the work aspect, I enjoy meeting new people and making new friends, and it is often fascinating to learn about the customers' business – you kind of learn how bits of the world work.

eCube: Yes, that is something we both share. As you may know from our previous interviews with Sue Skonetski and Eddie Orcutt of VSI, we are trying to get all the perspectives on the future of OpenVMS, now that VSI has taken charge. Can you tell me about the things you have been asked to do? What are you working on, and what progress are you seeing?
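Cameron's integration point is easy to picture concretely. The sketch below (Python, used purely for illustration; the record layout, field names, and values are invented, not taken from any real OpenVMS application) shows the kind of thin translation layer that lets a fixed-width legacy record travel to a JSON-speaking consumer:

```python
import json

# Hypothetical fixed-width record layout of the kind a legacy
# application might write; positions are invented for illustration.
LAYOUT = [("account", 0, 8), ("name", 8, 28), ("balance", 28, 38)]

def record_to_json(line):
    """Convert one fixed-width record into a JSON document that a
    downstream REST or message-queue consumer can understand."""
    fields = {name: line[start:end].strip() for name, start, end in LAYOUT}
    fields["balance"] = float(fields["balance"])  # numeric field
    return json.dumps(fields)

record = "00012345" + "Jones, A.".ljust(20) + "0001500.25"
print(record_to_json(record))
```

In practice the same idea appears at many levels: a message broker such as RabbitMQ, a web service, or a plain socket can carry the translated records, while the legacy application itself stays untouched.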

Answer: My main focus is around Open Source: figuring out what Open Source products would be good to have on OpenVMS and figuring out how to get them there, which may involve us doing the work, or possibly working in conjunction with the community. There has been a lot of good work done over the years around Open Source on OpenVMS, and we want to expand on that. From my perspective, whatever we do needs to be relevant to our customers. I suppose that is in some ways an obvious statement; however, finding out exactly what is relevant and determining where we are going to expend our energy is not necessarily straightforward, and this comes back to my comments in the previous question about working closely with customers and partners.

Since taking over OpenVMS, so to speak, we have obviously had to spend a good deal of time doing somewhat tedious things such as re-branding, getting environments and processes and procedures set up, and so on; however, we are largely through this now, and going forward it will definitely be all about innovation, adding new features, creating new products, making significant enhancements, and so forth. Clearly the x86 port is of highest priority; however, there is plenty more going on across the board, including the new TCP/IP stack, Java 8, and various other significant projects.

Personally, since around late March this year (2016) I have been largely focused on the Java 8 port. Thankfully I have had Camiel helping me, and I am pleased to say that we are just about across the line with this project. I will not bore you with the statistics, but let's just say that it has been a significant piece of work, and while we have certainly had plenty of challenges, things are looking good. I should note that this work has been done in collaboration with HPE, and certainly it would not have been possible without their assistance.

Piggybacking off the Java 8 work, we are looking at upgrading some of the Java-based products and potentially introducing a few new ones. For example, we have beta kits for Scala (a popular functional language that uses the JVM) and Maven (a powerful build tool for Java projects).

In addition to the Java work, I've been working on various other projects, including the new version of CSWS (Apache), a new Ruby port with a pile of interesting extensions, a new version of PHP, and various other such projects. We also have a partial git implementation and a reasonably functional Subversion client, although both of these items need further work. I've also managed to fit in a bit of consulting here and there, so all up it has been a busy year, and I don't think things will be much different next year!

eCube: Since you are the first VSI person we have spoken to with a focus on developers, and VSI has said that the future of OpenVMS depends on the developers, what do you think it will take to get developers on OpenVMS ready for the future?

Answer: There's no short answer to that question, I suppose in part because everyone's needs are somewhat different. However, if I am to look at things from the perspective of getting new or younger developers onto OpenVMS, clearly we need to be able to provide the sorts of environments and tools that they are used to using on other platforms. For example, further enhancing GNV will make it easier for developers familiar with Linux to work on OpenVMS, and providing powerful IDEs such as Eclipse is also vitally important, as indeed are more Open Source solutions and the ability to hook into facilities such as GitHub and continuous integration tools such as Jenkins.

To some degree, modern developers don't much care what the underlying operating system is, so long as they have the tools to do their job and those tools work well. While we do have available some good tools in this space (such as your NXTware Remote and Eclipse-based IDE), there is still a lot that needs to be done.

I should also add that in parallel we need to continue to support and enhance existing toolsets, as these are critical to many of our customers. One big item that comes to mind here would be bringing the C++ compiler up to current standards.

eCube: We have heard a few comments that the OpenVMS events like the Technical Update Days and the Boot Camp have a primary focus on hardware and operating systems topics. Developers say they don't want to attend because there is no focus on developer issues. Is this a fair assessment? If not, what can be done to change this perception? If so, will VSI change its focus to a more developer-oriented event? Are there any plans to change?

Answer: I am not so sure that this is an entirely fair assessment. Certainly some of the material presented at these events would not be of much interest to developers (it's not of much interest to me); however, I would like to think that at Boot Camp in particular there should be more than enough of interest to everybody (the committee do a great job of selecting a nice balance of presentations). But I do appreciate the problem. As to what can be done to address this matter, I am not sure. I think it can also be that developers will sometimes simply miss out on getting to attend these events, which is one thing. Another thing is that there is a lot of diversity to consider here, and what might be highly relevant to developers from one organization may be of absolutely no relevance to developers from another organization. I have done many talks over the years around reasonably general topics such as web services and integration, and possibly we could look to expand on this, but it is not really possible to go into much detail at a conference. If developers are interested in a particular topic, we could certainly see about facilitating custom workshops or training sessions.

Another thing that comes to mind might be to organize a larger number of smaller events. Such events do not necessarily need to be formal events organized by VSI; they could just be meetups arranged by OpenVMS users to share their experiences with others (standard meetup practice being to provide pizza and beer). It would be great if we (VSI) could get along to all such events, but practically this could be a challenge; however, I've been to meetups where speakers will call in via Skype, Hangouts, or whatever. Webinars are another possibility. A few years back my good friend John Apps and I did a series of OpenVMS development-related webinars that were well received, and feedback from these sorts of events can be used to better guide future activities. We did the talks at two different times to cover most time zones; it worked very well, although I didn't much enjoy getting out of bed at 2am or 3am in the middle of winter!

eCube: Well, I just figured you never slept very much! Continuing with development topics, the programming language support on OpenVMS is one of its strengths, because it supports so many different languages. You mentioned your work on Java 8 – it is expected to have current version support on OpenVMS in Q1 2017. Is that on schedule? How important do you think this is, and where do you see new Java ports in the priority list for VSI?

Answer: I've talked a little about the Java 8 port already. This has been (still is) a major project, and we are generally very happy with how it has gone. As of this moment we are coming to the end of a two-month field test, and we have been very pleased with the results: bugs were found (and fixed), and testing coverage has been comprehensive. The intention is to release in Q1 2017, and this is looking achievable (there certainly should be no technical impediments).

Java is clearly an important language, and we are factoring future Java ports into our planning. We will, for example, need to repeat the Java 8 port for x86, and will also need to start looking at Java 9, I suppose. It is also important to appreciate that there are now quite some number of other languages that use the JVM (Java Virtual Machine). I mentioned Scala previously, and Clojure would be another one that comes to mind. With Java 8 we are able to better support some of these other languages, which gives developers more options on OpenVMS, and makes it possible to port Open Source projects written in such languages across to OpenVMS (often without too much difficulty).

However, there are a number of other new languages that we also need to think about. For example, languages such as Rust and Google's Go language are becoming more popular and widely used, and it would be great to have these available on OpenVMS. Interestingly, these languages leverage the Open Source LLVM compiler backend, which we are porting to OpenVMS as part of the x86 work, so in theory it would be possible to do something with these other languages; however, this would not be a small job and it is just an idea at this stage.

Scripting languages such as Ruby, Lua, and Python are also very important. I mentioned previously that we have a new Ruby port available, and we are actively considering exactly what else we want to do in this space. We also have a version of Lua available. Something like Node.js would also be nice; however, this is another thing that we might want to hold off on until OpenVMS is up and running on x86 (for various reasons).

I have also talked a lot about Erlang in the past, and we might look to formalize some of this work. As things stand we have a couple of working ports of slightly older versions of Erlang; however, I would like to see us have available a more current release. The older versions are reasonably stable and functional; however, there are one or two limitations that I'd like to address before I would consider them to be fit for use in a production environment.

eCube: What are the development features of OpenVMS that make it a good operating system for developers? How important are tools like SCA and PCA for developers? These tools are absent from newer OSes like Linux. What can be done to attract young developers to the virtues of OpenVMS development?

Answer: All operating systems have their good and bad points. OpenVMS was designed by engineers for engineers, and this resulted in good 3GL language support, a very comprehensive set of library functions and system services, and various developer tools such as those you've mentioned that all work together seamlessly. Most OpenVMS developers are familiar with the RTL LIB$ routines; however, there are a whole load of other useful routines in the RTL, many of which (such as the parallel processing PPL library) seem to have been somewhat forgotten about. Other operating systems also have such functionality, although it may be a separate library that needs to be installed, as opposed to something that comes bundled with the operating system. The key point is that OpenVMS was designed with all of these sorts of things in mind, as opposed to evolving (in a somewhat ad hoc fashion) to accommodate them, and accordingly things seem (to me anyway) somewhat more logical on OpenVMS – it's like it all goes together logically, because it does – it was designed that way.

But getting back to your question about what can be done to draw attention to the virtues of OpenVMS development, I am really not sure. I don't think that you are ever going to convince a staunch Linux developer or a staunch Windows developer that OpenVMS has better facilities; it is simply an argument that (most of the time) you are not going to win; people like what they like, and you're just not going to shift them; you start entering religious-war territory. Obviously some people will be more open to the matter than others, but in general I think this is probably the wrong approach (initially at least). I suppose that ultimately it comes back to the comment I made previously about ensuring that we have the sorts of tools available on OpenVMS that "modern" developers are used to using on other platforms (irrespective of the relative merits of such tools), and ensuring that those tools work well. Once people are using the platform, it is easier to have a discussion with them about all of this other goodness.

The other side of things, I suppose, is giving existing OpenVMS developers more that they can use; there is a lot that we can do with the existing development stack, both ourselves and working with partners.

eCube: I agree; unless you can offer them something they can't get elsewhere. Moving on, there are a lot of new technologies that have developed since the 70's and 80's, like 4GLs, client/server architecture, object oriented languages and web services. How is OpenVMS adapting to these technologies?

Answer: I'm not sure that there's a short or easy answer to that question. If we go back in time, I think it could be argued that for a period (maybe the mid to late 80's and possibly into the early 90's) OpenVMS was easily at the forefront of operating system technology, and it was one of the first platforms on which the sorts of technologies you refer to were made available. Times change and large corporations do strange things; new trendy-looking kids arrive on the block; fashion changes. However, through all of this change, and in spite of going through two major acquisitions (Compaq and HP), OpenVMS has in one way or another for the most part managed to adapt to these changes in fashion. It is possibly only in the last decade where things have slipped somewhat; however, this is a recoverable situation, and we are working on making that recovery happen, with help and support from the community.

If I wanted to cite a few examples relating to your original question, I have fond memories of implementing DCE-based client-server solutions for several customers back in the mid 1990's. At that time DEC had the best DCE implementation, and for that matter OpenVMS arguably also had the best CORBA implementation. CORBA is still in fairly common usage, but to some extent neither CORBA nor DCE ever really achieved their perceived potential, and both lost out to the next wave of fashion, which for the most part centered around web-based applications and leveraging HTTP in all manner of strange ways, leading up to the advent of web services and the myriad of web services standards surrounding them. Implementing web services-based solutions on OpenVMS is not particularly problematical and several good solutions exist; however, one common problem that I have encountered many times is that OpenVMS users will have business-critical applications written in languages other than C or Java that they do not necessarily know how to integrate with the likes of C and Java, and this will often be an impediment to progress, or will result in some quite fascinating (and unnecessarily complicated, and often very brittle) workaround solutions.
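The brittle-workaround problem Cameron describes usually dissolves once the legacy routine is put behind one small, well-defined service boundary. Here is a minimal sketch (Python for illustration only; the operation, ledger contents, and wrapper are hypothetical, not from any project discussed in the interview) of a JSON request handler that could front such a routine:

```python
import json

def lookup_balance(account):
    # Stand-in for a call into a business-critical legacy routine
    # (COBOL, FORTRAN, BASIC, ...) reached through a callable wrapper.
    # The ledger data here is invented for illustration.
    ledger = {"00012345": 1500.25}
    return ledger.get(account)

def handle_request(body):
    """Service one JSON web-service request. In a real deployment this
    would sit behind an HTTP server (for example CSWS/Apache on
    OpenVMS); it is called directly here to keep the sketch
    self-contained."""
    request = json.loads(body)
    balance = lookup_balance(request["account"])
    if balance is None:
        return json.dumps({"error": "unknown account"})
    return json.dumps({"account": request["account"], "balance": balance})

print(handle_request('{"account": "00012345"}'))
```

Callers in any language that can speak HTTP and JSON can then reach the legacy logic without knowing, or caring, what language it is written in.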

I am not sure that I've adequately answered the question, but I think the bottom line is that to some degree OpenVMS has managed to adapt to changes in fashion, and it's our job to accelerate this.

eCube: I want to keep this focused on software development, but there is an important aspect that hardware plays in the future. What do you see as the major change which will occur when OpenVMS is supported on the x86-64 platform?

Answer: I suppose the obvious answer is that we'll be able to run OpenVMS on a much wider range of hardware! Somewhat less obvious perhaps is that it also opens up the potential to more readily port some interesting Open Source products to OpenVMS. For example, I mentioned previously how languages such as Rust and Go leverage LLVM. I think that I also mentioned how the V8 JavaScript engine used by Node.js makes extensive use of just-in-time (JIT) compilation in pretty much the same way as Java does. Implementing a just-in-time compiler for V8 on Itanium would not be a trivial exercise; however, x86 is supported. In short, I think it is fair to say that x86 provides more options and opens up an interesting array of opportunities to bring some new technologies to the OpenVMS platform.

It will also make us start thinking about a few things. For example, people running OpenVMS on their x86 laptops are probably going to want a decent GUI, and we therefore need to look at improving the current state of play in this space. Virtualization is probably also going to be a big one, and I can see many OpenVMS users being interested in the notion of running OpenVMS as a guest operating system in their corporate clouds. This in turn could have some interesting ramifications from a licensing perspective, and there are a few other things that we will need to consider, such as how to deal with shared storage and how OpenVMS might need to interact with core cloud services such as provisioning, and so forth.

eCube: You have been involved with Open Source tools for many years. What are the most important tools in your mind?

Answer: It really depends on the problem that you are trying to solve. For example, we've talked a lot about integration and some of the Open Source integration technologies and programming languages that I have used and hold in considerable regard. If I was to look at it from a software development perspective, I would probably say that two little tools I have used successfully time and time again on multi-million dollar projects would be flex and bison (essentially lex and yacc). I have used these tools to create grammars for custom RPC-style middleware solutions, and I have used them to develop parsers for language conversion projects. I would not claim to be an expert with these things by any means, but all of the projects in question were successful (and quite a lot of fun).

eCube: What about modernization tools? Do you use a modern IDE when you develop? Does that help OpenVMS grow in the future?

Answer: I fully appreciate the benefits of a modern IDE, and I believe that a modern IDE is essential for software development on OpenVMS, particularly if we want to attract younger developers, who have grown up with such things. Aside from just looking nice, IDEs provide many other features that are expected (taken for granted) by younger developers, such as integration with source code control systems, integrated debugging facilities, hooks into continuous integration tools, unit testing facilities, and so forth. Seamless cross-platform development is another key aspect here.

eCube: What are the obstacles that VMS Software faces down the road, and how can they be resolved?

Answer: It is always difficult to say what the future might hold, but there are certainly several things that we need to be cognizant of and put in place strategies to address. As is well known, the OpenVMS business prior to the advent of VSI had for various reasons been in steady decline, with users moving to alternative platforms and so forth. Some of the reasons for this were not necessarily related to anything that HP or Compaq or DEC had or had not done, but were down to things like corporate initiatives to standardize (or try to standardize) on a particular and more widely used operating environment. In some cases skills had been lost, effectively forcing the need for change. In some cases OpenVMS, with its reliability and stability, just ended up being its own worst enemy! From a VSI perspective, it is important not only to look after the needs of existing loyal OpenVMS users but also to put in place mechanisms that will ideally see increased use of the operating system, and there are many ways by which this may be achieved, including training, marketing, introduction of new technologies (as discussed previously), and so on. It may also entail working with partners to develop specific solutions in which the OpenVMS system is essentially an appliance. It should be noted that looking after existing OpenVMS users also entails many of these same activities. We are providing, either directly or in conjunction with partners, a range of consulting services, which extend to include the likes of hosting and application maintenance and support services.

eCube: What do you think is the key for OpenVMS for the future?

Answer: I think that I've covered most of this in my answers to some of the previous questions, but in general terms I believe it comes down to a few things. Looking after existing OpenVMS users is paramount, and this does not mean preserving the status quo; it means enhancing the operating system and layered products; introducing new technologies; providing training; holding regular information-sharing events; providing a range of services; and so on. We need to listen to our customers and continue to provide them with quality products and services. We also need to make the platform more appealing and relevant to a wider audience. With the plans we have and the team that we have in place, I am sure we are in good shape with all of these things.

As our web site says, all we do is OpenVMS, and the bottom line is that the operating system is now receiving more attention from an engineering perspective than it has for quite some considerable time. We will continue to advance the operating system, enhancing existing features, adding new features, providing support for new software technologies, and potentially porting it to other architectures (not only x86). There's no value in thinking small here, as if you think small you only ever achieve small. We have an opportunity to do big things with OpenVMS, and this is likely to involve taking it to places that might previously not have even been considered. We're a crazy bunch!

You might want to talk to our CEO Duane Harris about the plans VSI has for the future. He can tell you that our goal is to return OpenVMS to prominence as a leading operating system platform.

Mr. Barnes, one of the founders of eCube Systems, has been in the software industry for over 40 years. A graduate of Texas A&M with a Computing Science degree, Mr. Barnes has held a broad array of positions: CTO, project manager, lead systems programmer, systems manager, senior systems analyst, distributed computing analyst and middleware architect. A veteran of Control Data, Cray and Borland Software, Mr. Barnes has developed a wealth of expertise in a variety of different hardware/software platforms, including compiler optimization on supercomputers, database and systems integration, Fortran, C and Cobol development, and distributed application development. Knowledgeable in a variety of legacy and current platforms, Mr. Barnes continues to leverage and enhance his many skills by leading the Professional Services Group at eCube Systems.
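Earlier in the interview Cameron mentions using flex and bison to build grammars for middleware and language-conversion projects. As a rough, stdlib-only illustration of the same idea, here is a tiny arithmetic grammar (the sort one might feed to bison) evaluated by a hand-written recursive-descent parser in Python; the grammar and code are this magazine's invented example, not code from any of the projects discussed:

```python
import re

# A toy grammar of the kind one might write for bison:
#   expr   : expr '+' term | term
#   term   : term '*' factor | factor
#   factor : NUMBER | '(' expr ')'

def tokenize(text):
    """Split input into NUMBER and operator tokens (flex's job)."""
    return re.findall(r"\d+|[()+*]", text)

def parse(tokens):
    """Evaluate the token stream with the usual precedence
    (the job bison's generated parser would do)."""
    def expr(i):
        value, i = term(i)
        while i < len(tokens) and tokens[i] == "+":
            rhs, i = term(i + 1)
            value += rhs
        return value, i

    def term(i):
        value, i = factor(i)
        while i < len(tokens) and tokens[i] == "*":
            rhs, i = factor(i + 1)
            value *= rhs
        return value, i

    def factor(i):
        if tokens[i] == "(":
            value, i = expr(i + 1)
            return value, i + 1  # skip the closing ')'
        return int(tokens[i]), i + 1

    value, _ = expr(0)
    return value

print(parse(tokenize("2 + 3 * (4 + 1)")))  # prints 17
```

A generated flex/bison parser does the same two jobs, lexing and grammar-driven reduction, but in C and at far larger grammar sizes, which is what makes the pair suited to the RPC-middleware and language-conversion work described above.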

Protecting Sensitive Data In and Beyond the Data Lake

Carole Murphy
GLOBAL PRODUCT MARKETING, HPE SECURITY – DATA SECURITY

Carole Murphy currently manages product marketing for HPE Security – Data Security, where she is responsible for developing market strategy for the HPE SecureData product line and for Big Data/Hadoop and IoT solutions, including go-to-market planning, product communication, strategic positioning and market awareness.

The need to secure sensitive data in Hadoop and IoT ecosystems

Hadoop is a unique architecture designed to enable organizations to gain new analytic insights and operational efficiencies through the use of multiple standard, low-cost, high-speed, parallel processing nodes operating on very large sets of data. The resulting flexibility, performance, and scalability are unprecedented. But data security was not the primary design goal.

When Hadoop is used in an enterprise environment, security becomes paramount. Organizations must protect sensitive customer, partner and internal information and adhere to an ever-increasing set of compliance requirements. But by its nature, Hadoop poses many unique challenges to properly securing this environment, not least of which is the automatic and complex replication of data across multiple nodes once it enters the HDFS data store.

There are a number of traditional IT security controls that should be put in place as the basis for securing Hadoop, such as standard perimeter protection of the computing environment and monitoring of user and network activity with log management. But infrastructure protection by itself cannot shield an organization from cyber-attacks and data breaches, even in the most tightly controlled computing environments. Hadoop is a much more vulnerable target—too open to be able to fully protect. Further exacerbating the risk, the aggregation of data in Hadoop makes it an even more alluring target for hackers and data thieves.

Hadoop presents brand-new challenges to data risk management: the potential concentration of vast amounts of sensitive corporate and personal data in a low-trust environment. New methods of data protection at zettabyte scale are thus essential to mitigate these potentially huge Big Data exposures.
Data protection methodologies

There are several traditional data de-identification approaches that can be deployed to improve security in the Hadoop environment, such as storage-level encryption, traditional field-level encryption and data masking. However, each of these approaches has limitations.

For example, with storage-level encryption the entire volume that the data set is stored in is encrypted at the disk volume level while "at rest" on the data store, which prevents unauthorized personnel who may have physically obtained the disk from being able to read anything from it. This is a useful control in a Hadoop cluster or any large data store, due to frequent disk repairs and swap-outs, but it does nothing to protect the data from access while the disk is running—which is all the time.

Data masking is a useful technique for obfuscating sensitive data, most often used for the creation of test and development data from live production information. However, masked data is intended to be irreversible, which limits its value for many analytic applications and post-processing requirements. Moreover, there is no guarantee that the specific masking transformation chosen for a sensitive data field fully obfuscates it from identification, particularly when it is correlated with other data in the Hadoop "data lake."

While all of these technologies potentially have a place in helping to secure data in Hadoop, none of them truly solves the problem or meets the requirements of an end-to-end, data-centric solution.

Data-centric security

The obvious answer for true Hadoop security is to augment infrastructure controls by protecting the data itself. This data-centric security approach calls for de-identifying the data as close to its source as possible, transforming the sensitive data elements into usable, yet de-identified, equivalents that retain their format, behavior, and meaning.
This protected form of the data can then be used in subsequent applications, analytic engines, data transfers and data stores, while being readily and securely re-identified for those specific applications and users that require it. For Hadoop, the best practice is to never allow sensitive information to reach the HDFS in its live and vulnerable form. De-identified data in Hadoop is protected data, and even in the event of a data breach, yields nothing of value, avoiding the penalties and costs such an event would otherwise have triggered.
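To make the distinction concrete, here is a minimal, illustrative sketch (not HPE's FPE or SST algorithms; the key, vault, and function names are invented for this example) contrasting irreversible masking with format-preserving, reversible de-identification:

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key"   # illustrative only; real systems use managed keys
TOKEN_VAULT = {}           # token -> original; stands in for a secure vault

def mask_ssn(ssn):
    """Irreversible masking: fine for test data, useless for re-identification."""
    return "XXX-XX-" + ssn[-4:]

def tokenize_ssn(ssn):
    """Toy format-preserving de-identification (NOT HPE FPE / NIST FF1).

    A keyed, deterministic digit substitution that keeps the NNN-NN-NNNN
    shape, paired with a vault so authorized code can re-identify."""
    digits = ssn.replace("-", "")
    digest = hmac.new(SECRET_KEY, digits.encode(), hashlib.sha256).digest()
    t = "".join(str(b % 10) for b in digest[:9])
    token = f"{t[:3]}-{t[3:5]}-{t[5:]}"
    TOKEN_VAULT[token] = ssn
    return token

def detokenize_ssn(token):
    """Re-identification, for appropriately authorized applications only."""
    return TOKEN_VAULT[token]
```

The point of the sketch: the de-identified value keeps the original's format and can be recovered by authorized users, while the masked value cannot.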

The solution—HPE SecureData for Hadoop and IoT

HPE SecureData for Hadoop and IoT provides maximum data protection with industry-standard, next-generation HPE Format-Preserving Encryption (FPE; see NIST SP 800-38G) and HPE Secure Stateless Tokenization (SST) technologies. With HPE FPE and SST, protection is applied at the data field and sub-field level. It preserves characteristics of the original data, including numbers, symbols, letters and numeric relationships such as date and salary ranges, and maintains referential integrity across distributed data sets so that joined data tables continue to operate properly. HPE FPE and SST provide high-strength encryption and tokenization of data without altering the original data format.

HPE SecureData encryption/tokenization can be applied at the source before data gets into Hadoop, invoked during an ETL transfer to a landing zone, or applied by the Hadoop process transferring the data into HDFS. Once the secure data is in Hadoop, it can be used in its de-identified state for additional processing and analysis without further interaction with HPE SecureData. Alternatively, analytic programs running in Hadoop can access the clear text by using the HPE SecureData high-speed decryption/de-tokenization interfaces with the appropriate level of authentication and authorization. If processed data needs to be exported to downstream analytics in the clear—such as into a data warehouse for traditional BI analysis—there are multiple options for re-identifying the data, either as it exits Hadoop using Hadoop tools or as it enters the downstream systems on those platforms.
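The referential-integrity point can be illustrated with a toy deterministic token function (a stand-in for FPE/SST, not the actual HPE algorithms): because the same input always yields the same token, de-identified tables still join correctly on the protected column.

```python
import hashlib
import hmac

KEY = b"illustrative-key"  # invented for this sketch

def token16(value):
    """Deterministic 16-digit token for a 16-digit card number.

    A stand-in for FPE/SST: same input always yields the same token,
    so de-identified tables can still be joined on the protected column."""
    d = hmac.new(KEY, value.encode(), hashlib.sha256).digest()
    return "".join(str(b % 10) for b in d[:16])

orders   = [{"card": "4111111111111111", "amount": 42}]
payments = [{"card": "4111111111111111", "status": "settled"}]

# Protect the sensitive column in both tables before it reaches the data lake.
prot_orders   = [{**r, "card": token16(r["card"])} for r in orders]
prot_payments = [{**r, "card": token16(r["card"])} for r in payments]

# The join key survives de-identification:
joined = [(o, p) for o in prot_orders for p in prot_payments
          if o["card"] == p["card"]]
```

The join succeeds on the tokenized column even though neither table ever holds the live card number.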

Implementing data-centric security requires installing the HPE SecureData infrastructure components and then interfacing them with the appropriate applications and data flows. SDKs, APIs and command-line tools enable encryption and tokenization to occur natively on a wide variety of platforms, including Linux®, mainframe and mid-range systems. The solution supports integration with a broad range of infrastructure components, including ETL tools, databases, and programs running in the Hadoop environment, and is available for any Hadoop distribution. HPE Security—Data Security has technology partnerships with Hortonworks, MapR, Cloudera and IBM, and is certified to run on each of these. HPE SecureData is also integrated with the Teradata® Unified Data Architecture™ (UDA) and with the HPE Vertica Big Data Platform.

Rapid evolution requires future-proof investments

Implementing data security can be a daunting process, especially in the rapidly evolving and constantly changing Hadoop space. For long-term success and future-proofed investments, it is essential to apply the technology via a framework that can adapt to the rapid changes in Hadoop environments. Unfortunately, implementations based on agents frequently run into issues when new releases or new technologies are introduced into the stack, and require updating the Hadoop instance multiple times. In contrast, HPE SecureData for Hadoop and IoT provides a framework that enables rapid integration with the newest technologies needed by the business. This capability enables rapid expansion and broad utilization for secure analytics.

Securing the Internet of Things

Failure to protect sensitive data in the Hadoop environment carries major risks: data breaches, leakage of sensitive data to adversaries, and non-compliance with increasingly stringent data privacy regulations such as the General Data Protection Regulation (GDPR). Big Data use cases such as real-time analytics and centralized data acquisition and staging for other systems require that enterprises create a "data lake"—a single location for their data assets. While IoT and Big Data analytics are driving new ways for organizations to improve efficiencies, identify new revenue streams, and innovate, they are also creating new attack vectors that make easy targets for attackers. Perimeter security remains critical here, but it is increasingly insufficient: it takes, on average, over 200 days before a data breach is detected and fixed.

As the number of IoT-connected devices and sensors in the enterprise multiplies, the amount of sensitive data and Personally Identifiable Information collected at the IoT edge and moving into the back-end data center is growing exponentially. The data generated by IoT is a valued commodity for adversaries, as it can contain Personally Identifiable Information (PII), payment card information (PCI) or protected health information (PHI). For example, a breach of a connected blood pressure monitor's readings alone may have no value to an attacker, but when paired with a patient's name, it could enable identity theft and constitute a violation of HIPAA regulations.

IoT is here to stay. A recent Forbes article predicted that we will see 50 billion interconnected devices within the next 5-10 years. Because a multitude of companies will be deploying and using IoT technologies to a great extent in the near future, security professionals will need to get ahead of the challenge of protecting massive amounts of IoT data.
And, with this deluge of sensitive IoT data, Enterprises will need to act quickly to adopt new security methodologies and best practices in order to enable their Big Data projects and IoT initiatives.

New threats call for new solutions - NiFi Integration

A new approach is required, one focused on protecting IoT data as close to the source as possible. As with other data sources, sensitive streaming information from connected devices and sensors can be protected with HPE FPE to secure it against both insider risk and external attack, while the values in the data remain usable for analysis. Apache NiFi, a recent technology innovation, is helping IoT deliver on its potential for a more connected world. Apache NiFi is an open source platform that enables security and risk architects, as well as business users, to graphically design and easily manage data flows in their IoT or back-end environments.

HPE SecureData for Hadoop and IoT is designed to easily secure sensitive information that is generated and transmitted across Internet of Things (IoT) environments with HPE Format-Preserving Encryption (FPE). The solution features the industry's first-to-market Apache™ NiFi™ integration with NIST-standardized and FIPS-compliant format-preserving encryption technology to protect IoT data at rest, in transit and in use.

The HPE SecureData NiFi integration enables organizations to incorporate data security into their IoT strategies by allowing them to more easily manage sensitive data flows and insert encryption closer to the intelligent edge. This capability is included in the HPE SecureData for Hadoop and IoT product. In addition, it is certified for interoperability with Hortonworks DataFlow (HDF). With this industry first, the HPE SecureData for Hadoop and IoT solution now extends data-centric protection, enabling organizations to encrypt data closer to the intelligent edge before it moves into the back-end Hadoop Big Data environment, while maintaining the original format for processing and enabling secure Big Data analytics.
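Conceptually, an encryption step near the intelligent edge transforms each record before it flows to the back end. The sketch below is plain Python, not the NiFi processor API, and the field names and key are invented: identifiers are pseudonymized while measurements stay usable for analytics.

```python
import hashlib
import hmac

EDGE_KEY = b"edge-demo-key"  # hypothetical; real deployments use managed keys

def protect_reading(reading):
    """De-identify an IoT reading at the edge before it flows downstream.

    This is what an encryption step inserted into a data flow does
    conceptually; here a keyed pseudonym replaces the identifier."""
    pseudonym = hmac.new(EDGE_KEY, reading["patient_id"].encode(),
                         hashlib.sha256).hexdigest()[:12]
    return {**reading, "patient_id": pseudonym}

stream = [
    {"patient_id": "P-1001", "systolic": 120, "diastolic": 80},
    {"patient_id": "P-1001", "systolic": 135, "diastolic": 85},
]
protected = [protect_reading(r) for r in stream]

# Back-end analytics still works on the de-identified stream, and the
# pseudonym is stable, so readings from one device remain correlated.
avg_systolic = sum(r["systolic"] for r in protected) / len(protected)
```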

A Large Financial Institution Migrates Datacenters with No Downtime Using HPE Shadowbase ZDM

Keith B. Evans >> Shadowbase Product Management >> Gravic, Inc.
Paul J. Holenstein >> Executive Vice President >> Gravic, Inc.

HPE Shadowbase Zero Downtime Migration (ZDM) Overview

How is HPE Shadowbase ZDM able to migrate systems with no application downtime? An overview of the ZDM process is shown in Figure 1 through Figure 4. (Note that these figures do not reflect the subject company's actual system configuration, but demonstrate the general principles involved.)

Step 1: Set up the new system (Figure 1).

Figure 1 – HPE Shadowbase ZDM Step 1

Step 2: Load the existing source application database onto the new (target) system (Figure 2).

Figure 2 – HPE Shadowbase ZDM Step 2

Step 3: Test the new system (Figure 3). Once the load is complete, it is good practice to validate that the target database is an exact replica of the source database before proceeding. (The HPE Shadowbase Compare product may be used to perform this verification.) Uni-directional data replication is enabled so that the database of the new system remains synchronized with the current system while the current system continues to process transactions. This step allows the testers to work on a current copy of the production database to verify proper functioning with "real" data.

Migration of users to the new system is therefore accomplished with no application downtime. Furthermore, the normal risks of migration are eliminated. Users are moved to a known, properly functioning system, and the existing, unchanged system is always available, providing service. If there is a problem, users can quickly be returned to the original system until the problem is corrected. And, with bi-directional replication deployed, any data modified on the new system is reverse-replicated to the original system so that no data is lost if a failback occurs.¹

In summary, even the most challenging of migrations (hardware, software, and location) can be undertaken efficiently and with low risk using HPE Shadowbase ZDM technology and methodology.

The Company's Migration Objectives

The company's original active/active system included a twelve-processor, four-core NonStop NB54000 system, DC1 (in Datacenter 1), and a similar system, DC2 (in Datacenter 2). The two systems were interconnected via the company's existing bi-directional data replication engine, called "Old Bi-Dir Replication," as shown in Figure 5.

Figure 3 – HPE Shadowbase ZDM Step 3

Step 4: Enable bi-directional replication and move users to the new system (Figure 4). Finally, when the testing of the new system is complete, bi-directional replication is enabled and all users are moved to the new system. Using bi-directional replication allows the original system's database to remain current when first using the new system, which greatly speeds a failback to the original system without losing any data.

Figure 5 – The Company's Original Active/Active System

The company wanted to accomplish several objectives with its migration:

1. Retire DC1, a twelve-processor, four-core NonStop NB54000 system, located in Datacenter 1, replacing it with a new twelve-processor, four-core NonStop NB56000 system, called DC3, installed at a new datacenter location (Datacenter 3).
2. Move the DC2 system from Datacenter 2 to a new datacenter location (Datacenter 4), and rename the system DC4.
3. Replace its current data replication product ("Old Bi-Dir Replication") with an HPE Shadowbase active/active solution. (Its original data replication product was expensive, functionally stable and not being enhanced.)
4. Train its entire systems and operations center staff on HPE Shadowbase using a hands-on approach while the project occurred.
5. Achieve all of these goals with no application outage and no decrease in business continuity protection during the entire process. At all times, there must be at least two copies of the application and database available on two separate nodes.
6. Complete the project in a few short weeks, as leases were expiring.

Figure 6 shows the company's final active/active system configuration:

Figure 4 – HPE Shadowbase ZDM Step 4

After an appropriate period of time passes (to ensure the new system is properly functioning), the original system can be taken down (perhaps for its own upgrade).
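The value of bi-directional replication during the cutover can be modeled in a few lines. This is a toy model, not Shadowbase code: writes on the new system are mirrored back, so a failback to the original system loses nothing.

```python
# Toy model of the Step 4 cutover: with bi-directional replication,
# writes on the new system also appear on the original system.
# Structures and names are illustrative only.

original = {"acct1": 100}
new = {"acct1": 100}

def replicated_write(key, value):
    """Apply a user write on the new system; replication mirrors it back."""
    new[key] = value        # user transaction on the new system
    original[key] = value   # reverse replication keeps the old system current

replicated_write("acct1", 140)   # users now run on the new system
replicated_write("acct2", 75)

# Failback: users could return to the original system with no data loss,
# because both databases hold identical, current data.
```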

Figure 6 – The Company's Final Active/Active System

¹ For a more detailed description of HPE Shadowbase ZDM, please see the Gravic white paper, Using HPE Shadowbase to Eliminate Planned Downtime via Zero Downtime Migration.

All of these objectives were met with the use of HPE Shadowbase ZDM. The migration proceeded as follows:

Step 1: Install and Load the DC3 System
Step 2: Migrate Users from DC2 to DC3
Step 3: Shutdown and Move the DC2 System to Datacenter 4 (Rename it to DC4)
Step 4: Migrate Users from DC1 to DC4
Step 5: Retire the DC1 System

The following sections drill down into the details of each step.

Once the migration was completed, the DC2 system no longer had any users attached to it. At this point, the old data replication product that had kept the DC2 database synchronized with the DC1 database was decommissioned, and bi-directional replication between DC3 and DC2 was established using HPE Shadowbase (Figure 9). During this step, the Shadowbase configuration was verified before moving the DC2 system.

Step 1: Install and Load the DC3 System in Datacenter 3

The company’s first step was to obtain its new NonStop NB56000 (DC3) and install it in Datacenter 3 (Figure 7). The company timestamped the Audit Trail in the DC1 system and loaded the DC3 database from the DC1 database with data up to the timestamp. The company next enabled HPE Shadowbase bidirectional replication between DC1 and DC3 and began replication to DC3 from the DC1 timestamp, keeping the DC3 database synchronized with the DC1 database. The DC3 system was then exhaustively tested and verified before being put into service.

Figure 9 – Migrate Users from DC2 to DC3

Step 3: Shutdown and Move the DC2 System to Datacenter 4 (Rename it to DC4) Replication between the DC3 and DC2 systems continued until the DC2 system move was ready, and then was paused with a timestamp in order to create a replication restart point. The DC2 system was then shut down and moved to Datacenter 4 (Figure 10).

Figure 7 – Add the DC3 Node

Step 2: Migrate Users from DC2 to DC3, Isolating DC2 for the Shutdown/Move

When the test of the DC3 system was successfully completed, the migration of users from the DC2 system to the DC3 system began (Figure 8).

Figure 8 – Migrate Users from DC2 to DC3

Figure 10 – Shutdown, Move, and Rename DC2 to DC4

The previously validated HPE Shadowbase data replication configuration was then deployed between the DC3 system and the DC4 (previously DC2) system (Figure 11). The DC3 queue of database changes that had built up during the move was then flushed to the DC4 system by the HPE Shadowbase replication engine from its timestamp replication restart point, thereby making the DC4 database current. The two systems were then continuously kept synchronized via the HPE Shadowbase data replication engine.

Summary

This migration presented a number of challenging parallel requirements: • Complete within a few weeks • No application downtime • Full business continuity protection maintenance at all times (meaning at least two systems continuously available in the event that one system failed) • Physical replacement (upgrade) of one hardware system • Replacement of the existing data replication software product with a new product (HPE Shadowbase) • Relocation of all systems and networks to new datacenters in other cities • Minimal risk, only proceed to the next step once the current step is tested and proven • Easily accomplish failback of each step if necessary

These requirements were undeniably challenging, but the fact that they all were met is testament to the power of the HPE Shadowbase ZDM approach, which allowed for the migration of systems with no application downtime. The company used HPE Shadowbase ZDM to change its data replication software product, to move its active/active system to two new cities, and to perform a system upgrade in the process. During all of these activities, the company suffered no application downtime.

Figure 11 – Synchronize DC4 with DC3

Step 4: Migrate Users from DC1 to DC4 to Isolate the DC1 System

Next, the users served by the DC1 system were incrementally moved to the DC4 system (Figure 12).

Keith B. Evans works on Shadowbase business development and product management for Shadowbase synchronous replication products, a significant and unique differentiating technology. Asynchronous data replication suffers from certain limitations, such as data loss when outages occur and data collisions in an active/active architecture. Synchronous replication removes these limitations, resulting in zero data loss when outages occur and no possibility of data collisions in an active/active environment. Shadowbase synchronous replication can therefore be used for the most demanding of mission-critical applications, where the costs associated with any amount of downtime or lost data cannot be tolerated. For more information and the availability of Shadowbase synchronous replication, please email sbproductmanagement@gravic.com.

Figure 12 – Migrate Users from DC1 to DC4

Step 5: Retire the DC1 System to Create the Final DC3/DC4 Active/Active Solution

After all the users were migrated to the DC4 system, no users were left on the DC1 system. So, the HPE Shadowbase data replication engine connecting the DC1 and DC3 systems was decommissioned, and the DC1 system was retired, resulting in the final configuration (Figure 13).

Figure 13 – Retire the DC1 System, Final Configuration

Paul J. Holenstein is Executive Vice President, Gravic, Inc. He has direct responsibility for the Gravic, Inc. Shadowbase Products Group and is a Senior Fellow at Gravic Labs, the company's intellectual property group. He has previously held various positions in technology consulting companies, from software engineer through technical management to business development, beginning his career as a Tandem (HPE NonStop) developer in 1980. His technical areas of expertise include high availability designs and architectures, data replication technologies, heterogeneous application and data integration, and communications and performance analysis. Mr. Holenstein holds many patents in the field of data replication and synchronization, writes extensively on high and continuous availability topics, and co-authored Breaking the Availability Barrier, a three-volume book series. He received his BSCE from Bucknell University and an MSCS from Villanova University, and is an HPE Master Accredited Systems Engineer (MASE). To contact the author, please email SBProductManagement@gravic.com.

Hewlett Packard Enterprise directly sells and supports HPE Shadowbase Solutions (www.ShadowbaseSoftware.com); please contact your local HPE account team.

Tiered and Policy-Based Data Backup: The New Paradigm

Glenn Garrahan

Aside from personnel, the most irreplaceable asset of any business organization is data. Data is both the fuel – and the product – of the automated systems that support virtually all business processes today. Securing, protecting and ensuring immediate access to data in an efficient, cost-effective manner are the most important responsibilities assigned to contemporary business information technologists.

The traditional approach to data backup is to develop and implement a strategy that typically leverages one or more techniques intended to safeguard data against corruption, loss, unauthorized disclosure, or denial of access. Ideally, protection techniques are assigned to specific data sets (or pools) supporting specific business processes following careful analysis of the relative criticality of, and requirements associated with, various data pools. By itself, data is just an anonymous set of 1's and 0's; data inherits its importance from the business process that it supports. In business-critical environments, companies invest significantly in systems that provide high-availability components, total redundancy of hardware and software, and real-time replication and backup to prevent data loss. However, these precautions may not preclude a serious failure, and recovery may not seem to be business critical until actual data loss occurs.

Unfortunately, the approach in too many circumstances is to "just make a copy" and mission accomplished! No real consideration is given to the criticality of the data, appropriate backup locations for various data pools, required data retention periods, the cost per TB of backed-up data, and backup and restore windows. With the vast amount of data being backed up every second, this "one-size-fits-all" paradigm needs to change.

The Tiered Policy-Based Approach

Think of a tiered policy-based approach like this: no one would rent a room in the Four Seasons Hotel to store old magazines, holiday ornaments or Grandma's used furniture! Conversely, no one would be foolish enough to rent a locker at the local bus station to store priceless first-state Rembrandt etchings either! A thoughtful individual will determine the appropriate storage location based on the rarity, value, replacement cost and condition of the items being retained. So why should business data be any different? It shouldn't!

Step 1: Intelligent Backup Data Analysis

It's a given: all data is not the same. Data derives its importance – its criticality to the business and its priority for access following an interruption event – from the business process it supports. That's why data management planning must be preceded by the due diligence of business process analysis. Determination of which business processes are mission critical and which are merely important is mandatory. A physical mapping of business processes to their data (e.g., data produced and used by that process) must be performed, and subsequently, of data to its actual storage location within the infrastructure.

This is the crux of data management, and it needs to be done for several reasons:

1. All data isn't the same in terms of criticality or priority to restore; there's no one-size-fits-all data retention and protection strategy. In most organizations, data retention and protection involves a mixture of services used in appropriate combinations to meet the recovery requirements of different business processes and their data.
2. The diversity of threats to data – including bit errors occurring close to the data itself, a storage hardware failure, or a power outage, usually with a broad geographical footprint, occurring outside the storage device – might require different protective services. "Defense in depth" is a term often associated with this reimagining of policy-based data protection.
3. The practical issue of cost is a huge factor: one-size-fits-all strategies for data management are generally not very cost effective. Backing up mission-critical data to tape might create problems with the recovery requirements of always-on applications, because data restore might exceed the allowable backup window. Conversely, using an expensive dedup device to replicate the data of a low-priority application or business process would be overkill and a waste of money.

So, getting more granular in the assignment of management, protection and recovery services to data assets – creating policies for tiered data sets based on a thorough analysis of requirements – is the best way to rationalize expense while streamlining and optimizing data backup.

Step 2: Building an Integrated Data Management Strategy

Once the business needs of data are understood and mapped, the challenge is building and maintaining a data management strategy that includes multiple services targeted at multiple data sets. An analytical effort will be necessary to determine the recovery priorities of business processes (with their associated infrastructure and data), and it is obvious that this effort must be undertaken prior to developing any data protection strategy. A business impact analysis should be performed to identify the current infrastructure and data associated with a given business process, and the impact of an interruption in the services and access to data associated with that process.

The results of the impact analysis will naturally drive the setting of retention and recovery objectives that define the criticality and restoration priority of the subject process and its "time-to-data" requirement (sometimes called a recovery time objective). This in turn provides guidance for the definition of a strategy that applies policy-based data backup and recovery services and techniques to facilitate renewed access to data within the required timeframe, in a manner that fits budgetary realities and minimizes overall costs.

While this analysis will require commitment of usually scarce resources, the results should answer the three questions that must be addressed by any data protection plan:

1. What data needs to be protected?
2. Where is that data currently archived?
3. And finally, what is the best way to protect the data?

Of course, there is truth to the assertion that protecting data is a simple matter: make a copy of the data and move the copy off-site so it is not consumed by a disaster that befalls the original. Almost every data protection strategy provides protection as a function of redundancy, as replacing data following a disaster event is not a feasible strategy. While there is universal agreement on this concept, different vendors seek to promote different technologies for making the safety copies of the data, each promoting their wares as the one true path to data continuity. We find ourselves locked in a perpetual battle over what is the best technology for the protection of our data.

One possible approach would be to centralize service delivery using storage virtualization, which abstracts value-add services (like mirroring, replication, snapshots, etc.) away from storage arrays and centralizes them in the storage virtualization software "uber-controller." But to the best of our knowledge, no universal storage virtualization product works with the full range of enterprise host servers, local/remote storage subsystems and Cloud Object storage.

In the absence of such a centralized strategy, another location is needed where data tends to aggregate, hence providing the opportunity to apply tiered policy-based data management services. One idea is to employ a Virtual Tape Library (VTL), which has come into widespread use over the past decade. VTL technology evolved from a location where backup jobs were stacked until sufficient data existed to fully fill a tape cartridge (first generation) to an appliance offering virtual tape devices to expedite backup (second generation). Modern VTLs have advanced to a location where 30, 60, or 90 days' worth of data is stored for fast restore in the case of an individual file corruption event (one of the more common types of data disasters). VTLs have also been enhanced with additional storage services, including VTL-to-remote-VTL replication across WANs, de-duplication or compression services, data vaulting to Cloud Object storage, and even as the place to apply encryption to data.

The VTL is already a location where much of the corporate data is retained and where tiered policy-based data management services can be readily applied. It makes sense that these platforms become a "storage services director," a location where data protection, data management and tiered archiving can be applied to data assets per pre-established policies. With tiered policy-based data management, individual pools of data can be retained in an appropriate, cost-effective location based on the criticality of the data to its associated business process. In an ideal, service-oriented data protection scheme, data from specific business processes/applications is assigned retention policies drawn from a menu of available services, delivered via a variety of hardware and software tools, all in a highly manageable way. This concept is illustrated below.

Step 3: Solution Selection

This cutting-edge approach, in which a "storage director" serves as the cornerstone of efficient, tiered policy-based data management, is conceptually simple, almost "common sense." However, there are multiple important considerations that must be taken into account before infrastructure selection.

1. A storage director must facilitate clustering to allow the handling of large quantities of data via clustered nodes, ensuring adequate connectivity for both data sources and converged data storage infrastructure while streamlining the management of both the directors themselves and the tiered policies they implement.
2. The storage director device must natively connect to all servers and storage in the enterprise, ideally with no middleware, agents or third-party software. This is a key criterion; the storage director must be capable of being dropped into any complex of servers and storage devices, supporting all connectivity protocols, to minimize disruptions and simplify implementation.
3. Being policy-based, this solution must enable users to place data from different applications and host servers into data pools with different retention, location and replication policies. This means that data can be retained in the most cost-effective manner. This storage director must be storage device and location "agnostic."
4. Secure vaulting (preferably employing built-in encryption) of data to any cloud object storage is crucial. A tiered policy-based device must allow customers to vault appropriate data to the cloud for a second copy or disaster recovery while sending other, perhaps more essential, data to local disk or tape, all concurrently, all securely, with replication rates that meet the requirements of the enterprise.
5. The storage director device must be specifically designed for fault-tolerant, high-availability computing environments, given the centralized location of such a device in the data stream.

Is this "Storage Director" functionality available today?

Yes, as a matter of fact it is, with Tributary Systems' aptly named Storage Director! With Storage Director, enterprises can tier stored data and data policies down to individual data volumes on multiple host platforms based on business criteria and importance to business resiliency and restoration. This is intelligent data management!

In enterprises with multiple host platforms – HPE NonStop NB, NS and now NonStop X servers, HPE OpenVMS, Windows and VMware running HPE Data Protector, IBM z/OS mainframes, IBM AS/400s running IBM i (now IBM Power Systems), among others – Storage Director enables sharing storage technologies otherwise dedicated to each host platform. Such storage technologies can include existing enterprise storage disk, HPE StoreOnce, EMC Data Domain, and Quantum DXi data de-duplication devices, physical tape, Cloud Object Storage, or any combination of storage technologies concurrently, as dictated by individual data management needs. Such a converged approach improves storage performance, enables consolidation, and can lead to measurable savings per TB of retained data.

In conclusion, there is no one-size-fits-all solution for data protection. A successful strategy typically involves the assignment of a combination of data protection services to the data assets of a given business process based on recovery objectives, technology availability, and budget.

Glenn Garrahan is Director of HP Business for Tributary Systems, Inc. Previously Glenn spent 17 years as a Program and Product Manager with HP NonStop.
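The tiered, policy-based idea described above can be sketched as a small policy table that maps each data pool's business attributes to a backup tier. Tier names, attributes, and thresholds here are illustrative, not product settings.

```python
# A toy policy engine: each data pool carries business attributes from the
# impact analysis, and an ordered policy list maps them to a backup tier.

POLICIES = [
    # (predicate, tier) — first match wins
    (lambda p: p["criticality"] == "mission-critical" and p["rto_hours"] <= 1,
     "replicated-disk+cloud-vault"),
    (lambda p: p["criticality"] == "important", "dedup-appliance"),
    (lambda p: True, "physical-tape"),  # default: low-cost archive
]

def assign_tier(pool):
    """Return the backup tier for a data pool per the first matching policy."""
    for predicate, tier in POLICIES:
        if predicate(pool):
            return tier

pools = [
    {"name": "payments", "criticality": "mission-critical", "rto_hours": 1},
    {"name": "hr-docs", "criticality": "important", "rto_hours": 24},
    {"name": "old-logs", "criticality": "low", "rto_hours": 168},
]
plan = {p["name"]: assign_tier(p) for p in pools}
```

The design choice mirrored here is the article's central point: the tier is a function of business criticality and recovery objectives, not a single one-size-fits-all destination.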

Bring Your Own Things

Luther Martin

Just like the widespread use of personally owned mobile devices led to the "bring your own device" (BYOD) phenomenon, the rapid acceptance and proliferation of the internet of things (IoT) will probably lead to a situation where employees bring their personal IoT devices into the workplace. This will almost certainly cause security and privacy issues, and the problems caused by these will probably be cheaper and easier to solve if they are addressed before the bring your own things (BYOT) phenomenon grows to the point where it becomes difficult to manage.

The term "BYOD" was probably first used in 2009 to describe the situation where employees were using their personal smart phones, tablets and similar devices in the workplace. BYOD quickly gained in popularity. Now, there are lots of attempts to define best practices for BYOD, and consultants that specialize in the security and privacy issues around BYOD are fairly common. The US government had even published a standard that describes how BYOD should be managed in a business environment by 2016. Today, just eight years later, BYOD is the norm instead of the exception.

Technology is changing faster today than it was eight years ago, so the adoption of BYOT will probably happen faster than the adoption of BYOD, and it will probably be less than eight years before personal use of IoT technology is widespread. If it has not already happened, there will soon be more IoT devices than there are people, and many of these devices will be carried or worn in the workplace by employees. There are currently no standards for BYOT security, and there is no army of BYOT consultants yet. So it looks like it will be up to us to figure out best practices for understanding the issues around the BYOT phenomenon and how to effectively manage it. This may turn out to be trickier than it was for the BYOD problem.

People typically understand that there can be lots of sensitive data on a smart phone or tablet, so they easily understand the need to protect the information on these devices. They easily understand that the personal information about the people on their email or text messaging contact list is sensitive. They easily understand that the content of any emails or text messages that they have sent is sensitive. And they easily understand that work-related documents and spreadsheets are sensitive.

With IoT devices, however, it is not as clear that the information they carry needs to be protected. Can even the world's most clever hacker really find a way to exploit how many steps you have taken today or what songs you are listening to (both of which are managed by various IoT devices)? Maybe they cannot, but that is not the most serious security and privacy issue around BYOT. Instead, the biggest risk may be in the ways in which a clever hacker can find a way to reconstruct sensitive information from two or more types of sensitive data, each of which looks perfectly innocent by itself. The ability to do this is very similar to the way in which hackers can combine different types of information to uniquely identify an individual, even without any information that uniquely identifies the individual.

Information that does not uniquely identify an individual but which can be combined with other similar pieces of information is called a "quasi-identifier." Names, medical record numbers, or Social Security numbers are enough to uniquely identify an individual. Information like date of birth, gender or home ZIP code is not enough to uniquely identify an individual, but research suggests that these three pieces of data together are enough to uniquely identify about 87% of the population of the US. Similarly, it is very likely that clever hackers can find useful patterns in combinations of data from IoT devices, even though each of the types of data by themselves may seem harmless.

So it seems reasonable to consider data on any IoT device as "quasi-sensitive," and protect it just as if it were actually sensitive data. Doing this is probably a good first step towards managing the loss of sensitive information that may accompany the widespread use of IoT devices, particularly when they are part of a future in which BYOT becomes widespread. BYOT may be the future, but it does not have to be a future in which lots of sensitive data is inadvertently revealed to hackers, and protecting absolutely any information managed by IoT devices is a good first step in that direction.

Luther Martin is a Distinguished Technologist at Hewlett Packard Enterprise and is a widely-published author in a multi-phase career beginning as Cryptologic Mathematician at the NSA. He is also an internationally-recognized expert in cryptography and key management.
be of no value, but a clever analysis of two or more He has worked in the information security of them together may provide a surprising amount of industry for over 25 years. The first 11 of these information. Some combinations will probably reveal were doing what might be called “white hat things that would be considered sensitive. Some might hacking” for government and commercial even reveal data that is regulated by one or more of the data security and privacy laws that currently complicate clients; the balance have involved developing the lives of CIOs. information security products.
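The quasi-identifier linkage attack described above can be sketched as a simple join between two data sets on shared fields. Everything in this example is invented for illustration: both data sets, all field names, and the people in them are hypothetical.

```python
# A minimal sketch of a quasi-identifier linkage attack. Neither data set
# identifies anyone on its own: the fitness log has no names, and the
# public listing has no health data. Joining on (date of birth, gender,
# ZIP) re-identifies the "anonymous" record.

fitness_log = [  # hypothetical "anonymized" IoT export: no names
    {"dob": "1961-07-28", "gender": "F", "zip": "02138", "steps": 9412},
    {"dob": "1990-03-14", "gender": "M", "zip": "73301", "steps": 2207},
]

voter_roll = [  # hypothetical public record: names, no health data
    {"name": "J. Doe", "dob": "1961-07-28", "gender": "F", "zip": "02138"},
    {"name": "A. Roe", "dob": "1985-11-02", "gender": "M", "zip": "10001"},
]

def link(records_a, records_b, keys=("dob", "gender", "zip")):
    """Join two data sets on shared quasi-identifiers."""
    index = {tuple(r[k] for k in keys): r for r in records_b}
    return [
        {**a, "name": index[tuple(a[k] for k in keys)]["name"]}
        for a in records_a
        if tuple(a[k] for k in keys) in index
    ]

matches = link(fitness_log, voter_roll)
print(matches)  # the step count is now tied to a name
```

Each data set is harmless by itself; it is the join key, the quasi-identifier triple, that does the damage, which is exactly why the article argues for treating such data as "quasi-sensitive."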

Real-world Customers Use HPE Storage To Help Drive Business Growth
Lainie Guthrie, Competitive Programs

Change: A requirement for driving business growth

The pace of technology has revolutionized the way we work, shop, and engage. Just twenty years ago, clouds only floated in the sky, and smartphones didn't exist. Bleeding-edge technology was being able to visit your local coffee shop and log onto Wi-Fi. Now life without instant access to the internet is unimaginable for millions, including those in third-world countries. And in the next 10 years, we'll see even more change as our world becomes more intelligent and app-driven.

"Progress is impossible without change," the playwright George Bernard Shaw once said, "and those who cannot change their minds cannot change anything." With new, innovative technologies coming to market at an ever-increasing pace, it's imperative that you regularly evaluate your IT strategy and make the adjustments your business demands.

Making tough choices

Sometimes, it's the changing market and competitive conditions that force change. At other times, it's internal business dynamics or mergers and acquisitions. And at still other times, the decisions and choices of our suppliers drive us to reassess longstanding strategies and adjust our course to accommodate customers, retain business, and remain competitive.

In light of the new Dell Technologies, many formerly loyal Dell and EMC customers have reassessed their IT strategy and are looking to HPE to help accelerate innovation, enable transformation, and drive business growth. Here are some of their stories:

A small European telecommunications service provider delivers scalable cloud services

Previously a Dell customer, CallHosted signed up a new client who wanted to resell scalable cloud services. As a relatively small service provider focused on the SMB market, CallHosted needed to quickly find a way to deliver the required services without incurring huge, upfront infrastructure expenditures.

With their existing vendors unable to propose a cost-effective solution, CallHosted turned to HPE, who proposed an innovative solution using HPE Hyper Converged 240 servers and HPE StoreOnce 2700 storage as the building blocks. Leveraging HPE Flexible Capacity to deliver on-demand capacity that combines the agility and economics of the public cloud with the security and performance of on-premises IT, CallHosted is now able to rapidly roll out new services and offer their customers a reliable, secure, and highly scalable platform with a consumption-based IT payment model that aligns cash flow to actual capacity usage.

"Together with Hewlett Packard Enterprise, CallHosted is able to deliver three things that are critical to its customers: flexibility, convenience, and competitive prices," says Paul Smissaert, CEO of CallHosted.

An IT giant swaps out their storage infrastructure as they build for the future

To support their "mobile first, cloud first" vision, Microsoft has been aggressively moving applications to their Azure platform. But to support future growth, the Microsoft Cloud Managed Services team needed to transform their entire IT architecture and infrastructure to reduce complexity, improve performance, simplify management, and reduce costs.

Despite being a loyal EMC customer for more than 10 years, Microsoft chose HPE 3PAR StoreServ 8450 all-flash arrays, which offer a range of options that support true convergence of block and file protocols, all-flash array performance, and the use of spinning media to further reduce costs. The arrays provide outstanding versatility, performance, and density, thereby reducing acquisition and operational costs by up to 75%, all extremely compelling factors that appealed to Microsoft. In addition, HPE's proven architecture and cloud vision provided the confidence that HPE was the right partner over the long term.

Make the Smart Choice: "Progress is impossible without change"

Considering that many companies, from SMBs to Fortune 1000 enterprises, are reassessing their IT strategy and partnering with HPE to deliver innovative, next-generation IT infrastructure, maybe it's time for a change.

To find out what HPE can do for your business, talk to your HPE account rep or HPE channel partner today, and visit hpe.com/info/hpereadynow.

Study: The State of IoT Today
New research from Aruba, a Hewlett Packard Enterprise company, reveals how the Internet of Things became the nervous system of business. READ MORE >

XYGATE® Object Security
A single solution for complete control of Safeguard and OSS security

RBAC for HPE NonStop
- Minimize Security Administration Overhead
- Maximize Security, Control and Audit

"Simplicity is the ultimate sophistication" (Leonardo da Vinci)

Manage Access to Information. Learn more at xypro.com/XOS

©2016 XYPRO Technology Corporation. All rights reserved. Brands mentioned are trademarks of their respective companies.