Unlock The True Potential Of AI For An Intelligent Connected World

FEATURING RESEARCH FROM FORRESTER: AI Workloads Demand A New Approach To Infrastructure

AI is now mainstream and the driving force for the future ‘Intelligent Connected World’, poised to create exponential value across a range of industries. It is pushing the boundaries of compute with complex data-centric workloads, requiring developers to create different sets of solutions for diverse computing architectures spanning scalar, vector, matrix and spatial. The range of innovative AI hardware-accelerator architectures continues to expand, with chip manufacturers introducing specialized chipset architectures such as neural network processing units (NNPUs), field programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs), in addition to graphics processing units (GPUs), for handling different AI workloads at both the edge and in the cloud. This diversity adds an additional dimension of complexity to developing optimized AI solutions, as the choice of platform plays a critical role in building fit-for-purpose solutions. In addition, any change to hardware platform choices at a later stage will carry significant cost and time implications.

IN THIS DOCUMENT
Unlock The True Potential Of AI For An Intelligent Connected World
Research From Forrester: AI Deep Learning Workloads Demand A New Approach To Infrastructure
About Tata Consultancy Services Ltd. (TCS)

There is a transformational shift across the AI development lifecycle that is defining the evolution of multi-architecture AI paradigms. We are partnering to drive this transition towards unified, standards-based programming models to empower developers with software tools that can target any processing resource across diverse architectures.

TCS is fostering an ecosystem of technology players to enable enterprises to explore avenues of innovation, leveraging this new paradigm of AI development to foray into new markets, create new products and provide cutting-edge domain solutions. For instance, leveraging its deep domain, technology and business process expertise, TCS is partnering with leading innovators such as Intel to help enterprises unlock the full potential of AI with the right mix of hardware and software architectures.

AI continues to push boundaries, creating tremendous impact across industries, and holds great opportunity for both enterprises and society at large. However, it requires ecosystems of players across technology, industry and academia to come together to realize true end-to-end value from AI.

V. Rajanna

Senior Vice President & Global Head – Technology Business Unit, Tata Consultancy Services (TCS)

Rajanna is a key member of the senior leadership team at TCS and has held various critical roles. He is a multifaceted leader and, over two and a half decades, has left a valuable imprint on industry, academia and government. His experience spans various countries across the globe and encompasses areas such as sales, delivery, business operations, human resources, research & development, marketing, and education. Prior to his current role, Rajanna headed the Telecom OEM Business Unit, and under his astute leadership, revenues from the unit grew two-and-a-half times. Rajanna nurtured and built the largest technology customer partnership for TCS. He was the first CEO for TCS in China.

INTEL’S MISSION: AI EVERYWHERE

“Intel is focused on enabling businesses to spur innovation and discovery through Artificial Intelligence. Our extensive collaboration with TCS combines the best innovations from both companies to define the next wave of AI solutions across industry verticals. The collaboration allows us to accelerate innovation by integrating AI into critical business processes and to address ever-increasing business complexity. Together, we are driving advanced AI solutions to enable enterprises to extract maximum value from their IT investment.”

- Prakash Mallya, VP & MD, SMG, Intel India

Intel is a data-centric company that recognizes the transformational power of artificial intelligence (AI), and is investing heavily in the future of AI. Our system-level strategy provides customers with the most diverse portfolio of compute, matched with advanced memory, storage, and communications fabric. We offer the industry’s only mainstream CPUs with AI acceleration built in, and an accessible range of discrete AI accelerators, including edge VPUs, FPGAs, forthcoming discrete GPUs, and domain-specific architectures. But hardware can’t do AI by itself. This high-performance hardware is supported by mature software built on open components optimized to run best on Intel® hardware. Intel® Distribution of OpenVINO™ toolkit greatly simplifies the deployment of models across heterogeneous hardware. Currently, Intel IT is using OpenVINO to simplify defect detection in our manufacturing process.
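As a rough illustration of that simplification, the sketch below loads a model that has been converted to OpenVINO's intermediate representation (an .xml/.bin pair produced by the Model Optimizer) and runs it on a chosen device. This is a minimal sketch against the 2020-era Inference Engine Python API; the file names, input shape, and device choice are illustrative assumptions, not Intel's actual defect-detection pipeline.

```python
# Minimal OpenVINO inference sketch (2020-era Inference Engine Python API).
# Model file names, input shape, and device are illustrative assumptions.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="defect_detector.xml", weights="defect_detector.bin")
input_name = next(iter(net.input_info))

# The same IR can target different Intel hardware by changing device_name:
# "CPU", "GPU" (integrated graphics), or "MYRIAD" (Movidius VPU), for example.
exec_net = ie.load_network(network=net, device_name="CPU")

image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input batch
result = exec_net.infer(inputs={input_name: image})
```

Retargeting the same model from a CPU to integrated graphics or a VPU is, in this sketch, a one-line change to device_name, which is the heterogeneity the toolkit is designed to hide.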

Intel IT is focused on integrating AI into critical business processes to help the company address exponentially growing business complexity in product development, manufacturing, sales, and supply chain. Our Sales AI platform simplifies account management and recommends actions based on both customer and market activities, while AI-based transformation of our supply chain has reduced time to decision from six months to one week. IT collaboration with product testing teams built AI into Intel’s validation processes, saving significant time and money. Most recently, we worked with Intel’s Client Computing Group to incorporate AI (Intel® Dynamic Tuning Technology) into select designs of an upcoming Intel® Core™ processor generation. This new feature will offer AI-based pre-trained algorithms to predict workloads, allow higher turbo boost when responsiveness is needed, and allow extended time in turbo for sustained workloads.

Intel IT’s AI collaboration with Intel’s business units has proven clear, validated business value of more than US $1 billion over the last three years. We have hundreds of AI models running every day at Intel. These solutions have helped us achieve predictable business outcomes, better products, and optimized manufacturing processes at scale. Using AI insights also helps Intel reduce product cost and time to market. We are rapidly expanding our AI efforts across Intel as we see the value it has already delivered and the enormous potential.

Prakash Mallya

Vice President, Sales and Marketing Group
Managing Director, Intel India

Prakash Mallya is Vice President in the Sales and Marketing Group and India country manager at Intel Corporation. He is responsible for developing new growth areas for the company in the region. Mallya joined Intel in 2000 as a business development manager for the financial services segment in India. Earlier in his Intel career, Mallya had overall responsibility for sales, marketing and the enabling of Intel products and solutions across Southeast Asia. Mallya also previously served as country manager for Malaysia and Singapore. As head of sales and marketing operations in those countries, he was responsible for the growth of Intel’s business through channel distribution, local PC and server manufacturers and multinationals. He has held various leadership roles in multiple regions during his two decades at Intel. Mallya holds a bachelor’s degree in electrical and electronics engineering from Regional Engineering College, Tiruchirappalli, and earned his MBA degree from Bharathidasan Institute of Management, Tiruchirappalli, both in India.

AI Deep Learning Workloads Demand A New Approach To Infrastructure

GPUs Dominate Now, But A Broader Landscape Of AI Chips And Systems Is Evolving Quickly

by Mike Gualtieri and Christopher Voce with Srividya Sridharan, Michele Goetz, and Renee Taylor

May 4, 2018 | Updated: May 18, 2018

Why Read This Report

One breakthrough of AI is deep learning: a branch of machine learning that can uncannily identify objects in images, recognize voices, and create other predictive models by analyzing enterprise data. Deep learning can use regular CPUs, but for serious enterprise projects, data science teams must use AI chips such as GPUs that can handle massively parallel workloads to more quickly train and retrain models on large data sets. This report will help I&O professionals understand their AI infrastructure options — chips, systems, and cloud — to execute on deep learning.

Key Takeaways

AI Deep Learning Workloads Thrive With Massively Parallel Architectures
AI chips are a collection of traditional and emerging options, sometimes sporting thousands of cores, specifically designed to perform computations conducive to deep learning. Without AI chips such as graphics processing units (GPUs), deep learning would not be practical.

GPUs Got This Party Started
Nvidia GPUs are the most popular chips for deep learning. But field programmable gate arrays (FPGAs) and a parade of new options from vendors such as Intel and startups are on the way.

Buy Now, But Prepare For Obsolescence
Enterprises must do AI, therefore they must do deep learning, and therefore they must use AI chips and systems. The AI chips and systems you buy or use in the cloud today will be obsolete in about one year because AI chip innovation is so rapid.




Table Of Contents

AI Is The Fastest-Growing Workload On The Planet
AI Workloads Require AI Chips And Systems
GPUs Are The Dominant Option For Training, But The Landscape Is Diverse
Recommendations: Buy Short-Term, Think Long-Term
What It Means: Cleverness Can’t Compete Without Brute Force
Supplemental Material

Related Research Documents

AI Is Ready For Employees, Not Just Customers
Automation Drives The I&O Industrial Revolution
Deep Learning: An AI Revolution Started For Courageous Enterprises
TechRadar™: Automation Technologies, Robotics, And AI In The Workforce, Q2 2017

Forrester Research, Inc., 60 Acorn Park Drive, Cambridge, MA 02140 USA +1 617-613-6000 | Fax: +1 617-613-5000 | forrester.com

© 2018 Forrester Research, Inc. Opinions reflect judgment at the time and are subject to change. Forrester®, Technographics®, Forrester Wave, TechRadar, and Total Economic Impact are trademarks of Forrester Research, Inc. All other trademarks are the property of their respective companies. Unauthorized copying or distributing is a violation of copyright law. [email protected] or +1 866-367-7378

AI Is The Fastest-Growing Workload On The Planet

AI is not one monolithic technology.1 It is composed of building-block technologies, one of which is deep learning.2 Deep learning is a branch of machine learning that can uncannily identify objects in images, recognize voices, and create other predictive models by analyzing enterprise data. Enterprise AI engineering teams use deep learning to build AI models, and application development teams use those models to add AI smarts to applications.3 There’s one important thing about AI deep learning: It has an insatiable appetite for silicon, requiring the compute power to accommodate two types of workloads (see Figure 1):4

›› Training to build models. AI engineers and data scientists use deep learning frameworks such as TensorFlow or Cognitive Toolkit to analyze historical data about a specific domain, such as image data about car accident damage. The algorithms analyze that data and correlate it to existing adjusters’ reports. The result is a trained model that can analyze new images of car accident damage to predict the type of damage and cost to repair.5

›› Inferencing to make decisions based on trained models. Once AI engineers or data scientists create a model, they use it in production applications to make predictions. For example, an insurer can use a trained deep learning model to analyze photographs of property damage to identify the type of damage and estimate the cost of repairs. Think of inferencing as an input/output service — an application passes the necessary data to a service, and the service uses the inferencing model to return a result, as the brief sketch below illustrates.
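To make the two workloads concrete, here is a minimal sketch using tf.keras (TensorFlow's high-level API). The tiny synthetic "photo" data, the three damage classes, and the model architecture are illustrative assumptions, not any insurer's actual pipeline.

```python
# Training vs. inferencing in one minimal tf.keras sketch (TensorFlow 2.x).
# Synthetic data and the tiny model are illustrative assumptions.
import numpy as np
import tensorflow as tf

# --- Training: build a model from historical, labeled data ---
images = np.random.rand(100, 64, 64, 3).astype("float32")  # placeholder photos
labels = np.random.randint(0, 3, size=(100,))              # e.g., 3 damage classes

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(images, labels, epochs=5)  # the compute-hungry step that AI chips accelerate

# --- Inferencing: an input/output service around the trained model ---
new_photo = np.random.rand(1, 64, 64, 3).astype("float32")
probabilities = model.predict(new_photo)       # data in, prediction out
predicted_class = int(probabilities.argmax())  # e.g., index of "damaged fender"
```

The fit() call is where massively parallel hardware pays off during training; the predict() call is what later runs in production, often on entirely different hardware.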


FIGURE 1 Two AI Deep Learning Workloads Need Immense Compute Power

Training: learn to recognize automobile damage as competently as a human expert insurance adjuster; labeled training data (e.g., “damaged fender,” “no damage”) is fed to the model.

Inference: use the trained model in an application to automate automobile damage assessments; a new, unlabeled image is passed to the trained model, which returns a label such as “damaged fender.”

Source: Adapted graphic from Nvidia


AI Workloads Require AI Chips And Systems

This isn’t your grandfather’s analytics. It’s not about querying data. Deep learning algorithms are all math. AI infrastructure must not only accommodate big data; it also must supply massive compute capacity for math operations on vectors, matrices, and tensors.6 Deep learning is not practical without special infrastructure that is conducive to both high volumes of data and high volumes of calculations. That’s why AI infrastructure is necessary. That’s why all the internet giants, including Amazon, Facebook, Google, and Microsoft, have massive investments in AI infrastructure. Forrester defines AI infrastructure as:

Integrated circuits, computer systems, and/or cloud services that are designed to optimize the performance of AI workloads, such as deep learning model training and inferencing.
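To ground "math operations on vectors, matrices, and tensors": the core of a single neural network layer is a matrix multiply plus a simple nonlinearity, and training repeats such operations, forward and backward, across many layers and millions of examples. A toy sketch with arbitrary, illustrative shapes:

```python
# One dense-layer forward pass: a matrix multiply plus a nonlinearity.
# Shapes are arbitrary, for illustration only.
import numpy as np

batch = np.random.rand(32, 512)     # 32 input vectors of 512 features each
weights = np.random.rand(512, 256)  # the layer's weight matrix
bias = np.random.rand(256)

# (32 x 512) @ (512 x 256) -> (32 x 256): roughly 4.2 million multiply-adds
# for this one small layer on this one small batch.
activations = np.maximum(0.0, batch @ weights + bias)  # ReLU(xW + b)
```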

AI infrastructure is composed of AI chips, AI systems, and AI cloud systems (see Figure 2 and see Figure 3):

›› AI chips cater to specific deep learning and inference demands. Enterprises can run deep learning on regular CPUs, but for more intense enterprise projects, AI engineering teams often employ high-core-count chips such as Nvidia GPUs to train and retrain models on large data sets more quickly.7 GPUs were designed to perform math operations to render complex graphics, but it just so happens that those math operations can also be used for deep learning as well as other math-intensive high-performance compute (HPC) applications, such as simulation.8 But GPUs are no longer the only game in town. Chip vendors such as Intel, cloud providers such as Google (only available in the cloud), and a slew of startups offer alternative chips conducive to deep learning. Intel is also optimizing future Xeons to handle more AI workloads with enhancements to the processor instruction set.9 (A brief device-timing sketch follows this list.)

›› AI systems are packaged infrastructure solutions. AI systems can be as simple as dropping a GPU card into an existing computer system. Many AI engineering teams do just that to an existing workstation. However, for more intense deep learning projects, more compute power is necessary. Vendors such as Cray, Dell, IBM, and Hewlett Packard Enterprise (HPE) have developed AI-specific servers and offerings. These are often based on existing HPC systems and add in one or more processing cards full of GPUs, such as Nvidia’s TESLA P100. Vendors also bundle AI systems with the software necessary for AI engineering teams to do projects. Nvidia offers its own GPU-based DGX systems. Systems integrators also offer services to “build” AI systems. Although AI systems are available as on-premises hardware, most vendors offer access to them in the cloud.

›› AI cloud solutions offer tremendous scalability and pay-as-you-go pricing. Public cloud providers such as Amazon Web Services (AWS), Google, Microsoft, and others offer instances that are powered by CPUs, GPUs, FPGAs, and other options. Google has designed a chip called a tensor processing unit (TPU) that is optimized to use Google’s popular open source machine learning framework — TensorFlow. Microsoft and AWS are reported to be designing their own chips as well for use in the cloud and on devices.10 Enterprises that wish to get started quickly can always choose a cloud option, but remember: Deep learning workloads can quickly consume resources, so expenses can mount.11
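To see why teams reach for GPUs, here is a minimal sketch (assuming TensorFlow 2.x; "/GPU:0" presumes a supported accelerator and drivers are installed) that steers the same matrix math onto different devices and times it:

```python
# Time the same matrix multiply on different devices (TensorFlow 2.x sketch;
# the 4096x4096 size is arbitrary and "/GPU:0" assumes a supported GPU).
import time
import tensorflow as tf

x = tf.random.uniform((4096, 4096))

for device in ("/CPU:0", "/GPU:0"):
    try:
        with tf.device(device):
            start = time.time()
            y = tf.linalg.matmul(x, x)
            _ = y.numpy()  # force execution before stopping the clock
        print(device, round(time.time() - start, 3), "seconds")
    except (RuntimeError, tf.errors.InvalidArgumentError):
        print(device, "not available on this machine")
```

On GPU-equipped systems the gap widens dramatically as models deepen, which is the difference between days and hours of training described above.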

FIGURE 2 AI Infrastructure Is Composed Of Three Elements: AI Chips, AI Systems, And AI Cloud

• AI chips can massively parallelize operations amenable to AI model training and/or inferencing.

• AI systems include clusters of AI chips and additional high-performance features, such as fast interconnect and data access.

• AI cloud provides AI systems on demand, and therefore it is instantly scalable.


FIGURE 3 AI Infrastructure: Representative Vendors And Products

AI chip vendors:
• Graphcore: Intelligence Processing Unit (IPU)
• Intel: Xeon, FPGAs, Nervana Neural Network Processor (NNP), Movidius VPU
• Nvidia: Tesla GPU
• Wave Computing: Dataflow Processing Unit (DPU)

AI systems vendors:
• Cray: CS-Storm, XC Series
• Dell EMC: Ready Solutions
• Exxact: Exxact Tensor series
• HPE: Apollo Systems
• IBM: PowerAI servers
• Lambda Labs: TensorBook, Quad, and Blade
• Nvidia: DGX Systems
• Wave Computing: Wave Systems

AI cloud vendors:
• Amazon Web Services (AWS): AWS Deep Learning AMIs, GPU- and FPGA-based instances
• Google: Cloud TPU, Cloud GPU
• IBM: IBM GPU cloud, IBM PowerAI
• Microsoft: Azure Deep Learning Virtual Machine
• Oracle: Oracle Cloud Infrastructure Bare Metal GPU

GPUS ARE THE DOMINANT OPTION FOR TRAINING, BUT THE LANDSCAPE IS DIVERSE

When it comes to AI deep learning, GPUs get all the press. That’s because GPU systems are readily available and dramatically reduce the time necessary to train models. Model training that took days on CPU systems takes hours on GPU systems. But it is still early days for AI and deep learning. Today (see Figure 4):

›› Nvidia GPUs dominate the market for training deep learning models. Nvidia was prescient in seeing the demand for deep learning and has outpaced rival chip manufacturers thus far. The most popular deep learning software frameworks work with Nvidia, and most of the hardware and cloud vendors offer systems that include Nvidia GPUs. Full-stream oil and gas firm Baker Hughes, a GE company, uses Nvidia GPUs to create deep learning models for well planning and to predict machinery failure.12 Forrester has interviewed numerous enterprise customers from a diverse set of industries, including banking, insurance, retail, and healthcare — all use Nvidia GPUs to train models. Most of the public cloud providers, including Microsoft and Amazon, also use Nvidia GPUs to train deep learning, although they use other technologies as well.

›› Emerging application-specific integrated circuits (ASICs) aim to outperform GPUs. Chip and cloud vendors are not ceding the AI market to Nvidia. In addition to Google and its TPUs, a whole host of existing and startup vendors offer or are designing chips to make deep learning model training even faster. Giants such as Intel offer optimized math engines in Xeon, but they also plan to offer accelerator chips purpose-built for deep learning applications. There are too many startups to list, but noteworthy are Graphcore and Wave Computing, which aim to enter the market with purpose-designed chips for deep learning.

›› Inferencing can benefit from different options. While GPUs are the dominant option in training, there are differentiated options for inferencing. Once a model is trained, it can be used in production applications. A trained model has a certain topology that is static until AI engineers or data scientists build another iteration of the model. For example, FPGAs have programmable logic blocks that you can optimize to run trained models faster than GPUs. Intel’s Movidius VPU chips offer lower-power inferencing in edge use cases like surveillance for detection, tracking, and classification.

FIGURE 4 AI Chips Vary In Silicon Architectures

CPUs (central processing units)
• Already present in AI infrastructure; some have AI-optimized instruction sets
• Suitable for experimentation and modest training workloads

GPUs (graphics processing units)
• Hundreds of cores amenable to parallelized operations; ideal for training deep learning models
• Existing support for popular deep learning frameworks like TensorFlow and MXNet

FPGAs (field programmable gate arrays)
• Programmable architecture ideal for inferencing on already-trained models
• Special software is required to translate the trained model to the FPGA’s configurable logic blocks

ASICs (application-specific integrated circuits)
• Purpose-designed chip architectures to handle AI/deep learning training and/or inferencing workloads
• Vendors that create these chips often label them as IPU, DPU, NNP, etc., to reflect their design and branding


Recommendations

Buy Short-Term, Think Long-Term

Remember when you got a new laptop every other year because the pace of innovation was so rapid? That’s where we are with AI chips, systems, and cloud. The pace of AI infrastructure innovation is fueled by the insane growth of AI, highly competitive chip and cloud vendors, and deep learning software innovations. That doesn’t mean enterprises should wait for the dust to settle. No. Enterprises have to move forward with AI and, more importantly, make their scarce AI engineering and data science teams as productive as possible by giving them the most performant infrastructure possible to train AI models. Infrastructure and operations (I&O) pros must collaborate with application development and delivery (AD&D) pros to:

›› Match AI chips with machine learning frameworks. AI chips are impotent without deep learning software that knows how to use them. AI engineering and data science teams must help decision makers understand what deep learning frameworks they are using now and plan to use in the future. It’s highly likely that they are using more than one. Most frameworks support Nvidia GPUs, but as new AI chips appear, they may not support the frameworks you are using. Also, some AI chips will be specifically optimized for a single framework. Google TPUs are AI chips optimized for TensorFlow and are currently available only in Google Cloud. AI engineering teams that use TensorFlow may prefer that option because Google claims that TPUs are orders of magnitude faster than TensorFlow workloads running on GPUs. (A quick device-discovery check, sketched after these recommendations, is a useful first step.)

›› Think “hybrid,” because cloud can get expensive. Cloud often appears to be the perfect future-proof solution for AI, since you pay as you go and cloud providers often offer the latest and greatest AI chips and systems. However, cloud can get more expensive than an on-premises solution if your AI engineers are running workloads around the clock and you already have open space in a data center. Other factors to consider include the time it may take to convince enterprise security and risk teams to let data into the cloud. A hybrid cloud solution is ideal for most enterprises to optimize the speed of experimentation, overall cost, quick access to new technology, and solution time-to-market.

›› Consider AI systems that can leverage future AI chips. Many types of vendors offer AI systems that you can use on-premises or in the cloud, including enterprise systems vendors (Cisco Systems, Cray, Dell, HPE, and IBM), systems integrators, and even Nvidia with its DGX platform. The best of these vendors will architect their systems to replace existing or add new AI chips at a reasonable price. They design these systems to support multiple GPUs and optimize inter-GPU communication.

›› Separate architecture options for training and inferencing. The fact that GPUs are the dominant solution for training doesn’t lock you out of alternative options for inferencing. For instance, Microsoft trains models using GPUs. It then uses Microsoft Brainwave software to convert the models to run on optimized FPGAs for rapid inferencing. Power requirements and algorithm optimization might make ASICs, FPGAs, or even CPUs a more attractive option.
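As a first step in matching chips to frameworks, teams can simply ask the installed framework which devices it can actually use before committing to hardware. A minimal sanity-check sketch, assuming TensorFlow 2.x:

```python
# List the accelerators the installed framework can actually see
# (TensorFlow 2.x API; output varies by machine).
import tensorflow as tf

print("TensorFlow", tf.__version__)
for device in tf.config.list_physical_devices():
    print(device.device_type, device.name)
# A typical GPU-equipped box prints a CPU entry plus one GPU entry per card;
# an empty GPU list means training would silently fall back to the CPU.
```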


What It Means

Cleverness Can’t Compete Without Brute Force

Remember the Planet of the Apes movies? The orangutans are the politicians. Chimpanzees are the scientists. Gorillas are the muscle. Well, AI infrastructures are... the gorillas of AI. Enterprises and I&O leaders who wish to leverage AI to remain or become leaders in their industry must equip their AI engineering and data science teams with the best and fastest tools. That certainly means staying abreast of open source innovation and leveraging differentiated enterprise data. But it also means providing those same teams with the fastest possible AI infrastructure to accelerate the AI business innovation life cycle. Why take three days to train one iteration of a deep learning model when you could do it in one hour? The algorithms, amount of data, and number of iterations necessary to train a good model will only get more intense. Don’t make data science and AI engineering teams beg I&O for AI infrastructure, or your enterprise will fall behind.


Supplemental Material

COMPANIES INTERVIEWED FOR THIS REPORT

We would like to thank the individuals from the following companies who generously gave their time during the research for this report.

Amazon
Google
Graphcore
Intel
Microsoft
NVIDIA

Endnotes

1. Forrester offers two definitions for artificial intelligence: pure AI and pragmatic AI. See the Forrester report “Artificial Intelligence: What’s Possible For Enterprises In 2017.” For details on the building-block technologies of AI, see the Forrester report “TechRadar™: Artificial Intelligence Technologies, Q1 2017.”
2. For more information on how deep learning is a revolution, see the Forrester report “Deep Learning: An AI Revolution Started For Courageous Enterprises.”
3. See the Forrester report “Deep Learning: An AI Revolution Started For Courageous Enterprises.”
4. By “silicon” we mean integrated circuits, also known as chips.
5. See the Forrester report “A Machine Learning Primer For BT Professionals.”
6. In mathematics, a vector is a quantity that has a direction and magnitude; a matrix is a rectangular array of numbers, symbols, or expressions arranged in rows and columns; and tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Source: “What is a Tensor?” Dissemination of IT for the Promotion of Materials Science (DoITPoMS), University of Cambridge (https://www.doitpoms.ac.uk/tlplib/tensors/what_is_tensor.php).
7. GPUs are the most popular AI chips used today. GPUs are also used for graphics, of course, but they are also used for blockchain applications — specifically, mining.
8. Multiple research studies have shown that GPUs are orders of magnitude faster than CPUs to train deep learning models. Source: John Lawrence, Jonas Malmsten, Andrey Rybka, Daniel A. Sabol, and Ken Triplin, “Comparing TensorFlow Deep Learning Performance Using CPUs, GPUs, Local PCs and Cloud,” Semantic Scholar, May 5, 2017 (https://pdfs.semanticscholar.org/42ce/ccb61c35613bc262c47b35e392ec79ac247d.pdf).
9. Intel’s current Xeon Scalable processors include new architecture and instructions that benefit many workloads, including AI deep learning training and inference.
10. Source: Dina Bass and Ian King, “Microsoft pushes further into chip design as it jockeys to be artificial-intelligence leader,” The Seattle Times, July 24, 2017 (https://www.seattletimes.com/business/microsoft-pushes-further-into-chip-design-as-it-jockeys-to-be-artificial-intelligence-leader/) and Jon Swartz and Barron’s, “Amazon Has Designs On A.I. Chips,” Nasdaq, February 25, 2018 (https://www.nasdaq.com/article/amazon-has-designs-on-ai-chips-cm926372).
11. Workloads consume lots of processing power and tend to run for long periods of time.
12. Source: Tony Paikeday, “NVIDIA and Baker Hughes, a GE Company, Pump AI into Oil & Gas Industry,” NVIDIA Blog, January 29, 2018 (https://blogs.nvidia.com/blog/2018/01/29/baker-hughes-ge-nvidia-ai/).

We work with business and technology leaders to develop customer-obsessed strategies that drive growth.

PRODUCTS AND SERVICES

›› Core research and tools
›› Data and analytics
›› Peer collaboration
›› Analyst engagement
›› Consulting
›› Events

Forrester’s research and insights are tailored to your role and critical business initiatives.

ROLES WE SERVE

Marketing & Strategy Professionals: CMO; B2B Marketing; B2C Marketing; Customer Experience; Customer Insights; eBusiness & Channel Strategy

Technology Management Professionals: CIO; Application Development & Delivery; Enterprise Architecture; ›› Infrastructure & Operations; Security & Risk; Sourcing & Vendor Management

Technology Industry Professionals: Analyst Relations

CLIENT SUPPORT For information on hard-copy or electronic reprints, please contact Client Support at +1 866-367-7378, +1 617-613-5730, or [email protected]. We offer quantity discounts and special pricing for academic and nonprofit institutions.

Forrester Research (Nasdaq: FORR) is one of the most influential research and advisory firms in the world. We work with business and technology leaders to develop customer-obsessed strategies that drive growth. Through proprietary research, data, custom consulting, exclusive executive peer groups, and events, the Forrester experience is about a singular and powerful purpose: to challenge the thinking of our clients to help them lead change in their organizations. For more information, visit forrester.com.

ABOUT TATA CONSULTANCY SERVICES LTD. (TCS)

Tata Consultancy Services is an IT services, consulting and business solutions organization that has been partnering with many of the world’s largest businesses in their transformation journeys for over 50 years. TCS offers a consulting-led, cognitive powered, integrated portfolio of business, technology and engineering services and solutions. This is delivered through its unique Location Independent Agile™ delivery model, recognized as a benchmark of excellence in software development.

A part of the Tata group, India’s largest multinational business group, TCS has over 448,000 of the world’s best-trained consultants in 46 countries. The company generated consolidated revenues of US $22 billion in the fiscal year ended March 31, 2020, and is listed on the BSE (formerly Bombay Stock Exchange) and the NSE (National Stock Exchange) in India. TCS’ proactive stance on climate change and award-winning work with communities across the world have earned it a place in leading sustainability indices such as the Dow Jones Sustainability Index (DJSI), MSCI Global Sustainability Index and the FTSE4Good Emerging Index.

For more information, visit us at www.tcs.com.