Data-Centric Innovation Summit

DAN MCNAMARA, SENIOR VICE PRESIDENT & GENERAL MANAGER, PROGRAMMABLE SOLUTIONS GROUP

Devices | Edge network | Cloud/data center

Removing Data Bottlenecks with FPGA Acceleration

#IntelDCISummit

FPGA: high-throughput I/O

Thousands of parallel processing units | Programmable | Lower latency, higher performance | Multi-functional

Business outcomes: real-time actionable intelligence at the edge; new revenue streams for communication service providers; improved total cost of ownership (TCO) in the cloud.

Data center and communications revenue*: $1.0B (2016), $1.8B (2017), $1.9B (1H ’18).
Year-over-year revenue growth, 1H ’18: data center 140%; advanced and embedded products (28nm, 20nm, 14nm) 50%; total revenue 17%.
*Revenue excludes products with integrated FPGAs

Edge/Embedded | Enterprise | Networking | Cloud

IA+FPGA solutions. Networking: 5G, NFV, wireline; vision: smart retail, industrial (IoT), radar/surveillance; analytics: AI, financial, health & life sciences; database/storage: security, AI, transcode.

IP & ecosystem. Intel partnerships: IP, OEM, SI, ISV, VAR
Software. Intel+FPGA flows: software-centric design methodology
Silicon/boards. HW leadership: silicon, boards, FPGA acceleration cards

Infrastructure acceleration (network | storage | security) and look-aside acceleration (AI inference, video transcode, applications such as database and search), driven by the desire for lower server overhead.

[Diagram: Intel® processor cores paired with FPGA accelerators for both applications and infrastructure, at 100/200/400 Gbps.]

Growing list of solution acceleration partners for the Intel® Programmable Acceleration Card (PAC): data analytics with Intel® FPGAs, AI, financial, video processing, cybersecurity, genomics. Qualified servers include PRIMERGY RX2540 M4 and Dell R640, R740, R740xd.

*Other names and brands may be claimed as the property of others.

5G network challenges1: 1000X capacity increase, 5X decrease in latency, evolving 3GPP standards

[Diagram: radio units (antenna, analog front-end, digital front-end (DFE), CPRI) connect to baseband processing units (Layer 2/3 and physical layer), with Ethernet transport to routers / core network.]

1M+ BASEBAND UNITS IN 2022 5M+ RADIO UNITS IN 2022

1. https://www.techworld.com/apps-wearables/what-is-5g-everything-you-need-know-about-5g-3634921/ *Mobile Experts base station transceiver forecast, 2018

Network infrastructure is moving from physical appliances (a single application on dedicated hardware with proprietary management, e.g. edge, EPC, router) to flexible cloud infrastructure on industry-standard servers (decoupled software on standard x86 hardware with solution-agnostic management).

NFV: management & orchestration over virtualized compute, storage, networking, and FPGA resources, replacing TEM/OEM stacks built on proprietary OSes and ASIC, DSP, FPGA, and ASSP hardware accelerators.

Performance and power: edge workloads

Cloud: high-capacity memory, high-speed I/O, RAID & 10GbE.
Edge (surveillance, smart retail, Industry 4.0): video playback, transcoding, VMS, video analytics & sensor aggregation.
Device (smart classroom): excellent image quality, image enhancement, encoding & analytics.

Edge | Cloud / data center

Silicon advantages for speech/translation (RNN): support for evolving AI topologies; high on-chip memory for increased throughput; low-latency inference; energy-efficient inference.

Toolkits†: OpenVINO™ (Open Visual Inference & Neural network Optimization toolkit for inference deployment on CPU/GPU/FPGA from multiple frameworks: TensorFlow*, Caffe* & MXNet*) and Intel® nGraph™ Compiler (open-sourced compiler for deep learning model computations, optimized for multiple devices), built on a foundation of application libraries for developers.
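The "one toolkit, many devices" idea behind these toolkits can be sketched as a plugin-dispatch pattern: the application calls a single inference API, and a device-specific backend executes it. The sketch below is purely illustrative; the class names, method names, and device strings are hypothetical and are not the actual OpenVINO API.

```python
# Schematic sketch of "one model, many devices" inference dispatch.
# All names here are hypothetical illustrations, not the real OpenVINO API.

class InferencePlugin:
    """One backend (CPU, GPU, or FPGA) that can execute a loaded model."""
    def __init__(self, device: str):
        self.device = device

    def infer(self, batch):
        # A real plugin would run the compiled network on its device;
        # here we just tag each input so the dispatch is visible.
        return [f"{self.device}:{x}" for x in batch]

class Runtime:
    """Selects a registered plugin by device name, plugin-toolkit style."""
    def __init__(self):
        self.plugins = {}

    def register(self, plugin):
        self.plugins[plugin.device] = plugin

    def infer(self, batch, device="CPU"):
        return self.plugins[device].infer(batch)

runtime = Runtime()
for dev in ("CPU", "GPU", "FPGA"):
    runtime.register(InferencePlugin(dev))

# The application code is identical regardless of target device:
print(runtime.infer(["img0", "img1"], device="FPGA"))
```

The design point this illustrates is that retargeting from CPU to FPGA changes only the device argument, not the application code.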

“With Microsoft’s AI for Earth program we are putting our cloud and AI tools in the hands of those working to solve global environmental challenges – a topic that requires combining big data, big compute, and efficient algorithms. Deploying deep neural network models to field-programmable gate array (FPGA) services using Microsoft Project Brainwave is one super simple way to achieve this. Recently we used this FPGA service to perform land cover mapping of the entire United States, analyzing 10 trillion pixels across 20 TB of aerial imagery. Microsoft Project Brainwave, using Intel FPGAs, scored these 200 million images in their entirety in just over 10 minutes for a cost of $42.”

Doug Burger, Technical Fellow, Azure HW Systems Group

FPGA

FPGA: programmable logic blocks (can be programmed to perform many functions), programmable input & output blocks (can be programmed for many types of I/O), and massive amounts of programmable routing (can be programmed to connect from anywhere to anywhere).
Structured ASIC: fixed-function logic blocks (single function), fixed-function input & output blocks (single function), and fixed, point-to-point routing.
Benefits: a cost and power reduction path for FPGA customers (roughly half the cost and half the power); lower NRE cost and faster time-to-market for ASIC customers; a scalable technology providing a cost-reduction pathway for 16nm/10nm/7nm FPGA products.

FPGA versatility addresses the evolving needs of the data era

IA+FPGA solutions creating unparalleled customer value

Expanded TAM and end-to-end lifecycle solutions with eASIC acquisition


Statements in this presentation that refer to business outlook, future plans and expectations are forward-looking statements that involve a number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "goals," "plans," "believes," "seeks," "estimates," "continues," "may," "will," “would,” "should," “could,” and variations of such words and similar expressions are intended to identify such forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Such statements are based on management's current expectations, unless an earlier date is indicated, and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Important factors that could cause actual results to differ materially from the company's expectations are set forth in Intel's earnings release dated July 26, 2018, which is included as an exhibit to Intel’s Form 8-K furnished to the SEC on such date. Additional information regarding these and other factors that could affect Intel's results is included in Intel's SEC filings, including the company's most recent reports on Forms 10-K and 10-Q. Copies of Intel's Form 10-K, 10-Q and 8-K reports may be obtained by visiting our Investor Relations website at www.intc.com or the SEC's website at www.sec.gov.

All information in this presentation reflects management’s views as of the date of this presentation, unless an earlier date is indicated. Intel does not undertake, and expressly disclaims any duty, to update any statement made in this presentation, whether as a result of new information, new developments or otherwise, except to the extent that disclosure may be required by law.

AUGUST 8, 2018 | SANTA CLARA, CA | Data-Centric Innovation Summit

RAJEEB HAZRA, CORPORATE VICE PRESIDENT, DATA CENTER GROUP

2014-2017: the bear era. Business environment and industry sentiment (DCG E&G revenue vs. IDC IT infra spend1, 2014-2017):
“We are seeing CIOs increasingly reconsidering data center build-out” (January 4, 2014)
“…research shows steady drop in on-premise hardware spend” (April 10, 2016)
“Are Corporate Data Centers Obsolete In The Cloud Era?” (June 11, 2016)
Factors: macroeconomic uncertainty + evolving cloud strategies = IT infrastructure spending decline

1Source: Intel; IDC Quarterly IT Infra Tracker Q1 2018. *Other names and brands may be claimed as the property of others.

2014-2017: the bear era. We believed business transformation is inevitable and will drive increased IT investment:

Legacy infra will “age” faster

Enterprise will “go hybrid” and adopt private clouds

AI will drive on-prem infra growth

…and we invested in accelerating private/hybrid cloud, expanding analytics & growing AI, and accelerating time to value.

Our strategy is delivering

Private cloud growth: 6% adoption (2013) → 12% adoption (2018)1.

Cloud repatriation: 80% of companies report repatriation activity2.

AI/analytics on-prem CPU deployment: 2X growth rate (’17-’21 vs. ’14-’16)3.

1Source: IDC Cloud Infrastructure Tracker 1Q18, June 2018. 2Source: IDC Cloud and AI Adoption Survey, January 2018; n=400. 3Source: Intel estimate. *Other names and brands may be claimed as the property of others.

DCG E&G revenue growth: 4% → 6%. “Server Market Sizzles in Q1, Better Prospects Ahead in 2018” (June 4, 2018).

Intel® Xeon® processor: heartbeat of the enterprise. The Intel® Xeon® Scalable processor is creating and delivering value: a 65% performance gain across the broadest range of workloads1; leadership virtualization performance2; a unified stack for unparalleled manageability and RAS consistency; and the fastest ramp & highest mix of top-end SKUs since the Intel Xeon processor E5 v2 family (enterprise segment CPU mix3: Xeon v3 “Haswell”, Xeon v4 “Broadwell”, Xeon Scalable “Skylake”; Q3’18 QTD).

Performance results are based on testing as of 04/01/2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. Configurations 1, 2: see slide Performance Benchmark Disclosure. 3: Source: Intel.

Platform innovation: Intel Ethernet, Intel Omni-Path Fabric, Intel Silicon Photonics, Intel FPGAs, Intel SSDs

Enabling revolutionary capabilities

SAP founder Hasso Plattner, SAPPHIRE 2018 keynote. Faster start times for less downtime:

51 min (DRAM with SSD storage) → 4 min (persistent memory with SSD storage): a 12.5X improvement3. Increased memory capacity reduces TCO: >3 TB total memory capacity per CPU socket.

Performance results are based on testing as of 06/06/2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. Configurations 3: see slide Performance Benchmark Disclosure.

Accelerating enterprise AI. Example: financial services workflow

China UnionPay deployed a neural network for fraud detection on Intel® Xeon® processors: a 60% increase in coverage and a 20% increase in accuracy, without disrupting their workflow1.

1Source: https://www.intel.com/content/dam/www/public/us/en/documents/case-studies/union-pay-case-study.pdf *Other names and brands may be claimed as the property of others.

Intel® Select Solutions

Private cloud: Microsoft Azure Stack*, Windows Server* SDS, Blockchain: Hyperledger Fabric, VMware Cloud Foundation*, Red Hat OpenShift* Container, VMware vSAN*, NFVi: Ubuntu*, NFVi: Red Hat*
Analytics: Microsoft* SQL Server Business Operations, Microsoft* SQL Server Enterprise Data Warehouse, SAP* HANA certified appliances
Artificial intelligence: BigDL on Apache Spark*
HPC: Genomics analytics, simulation & modeling, software-defined visualization

Accelerating Intel® architecture innovation into the market

*Other names and brands may be claimed as the property of others.

Enabling the exascale era: a converged architecture for HPC+AI

New CPU microarchitecture | Advanced interconnect | Novel memory/storage hierarchy | High-performance converged software

Winning with…

Zero distance from our customers… unmatched global sales force, long-term co-design, joint product innovation.

Breakthrough innovations… new silicon, AI, photonics, acceleration.

Unmatched partner scale… software optimization, solutions with ISVs & SIs, partner marketing.

Unmatched capabilities + scale + scope


NOTICES AND DISCLAIMERS

Statements in this presentation that refer to business outlook, future plans and expectations are forward-looking statements that involve a number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "goals," "plans," "believes," "seeks," "estimates," "continues," "may," "will," “would,” "should," “could,” and variations of such words and similar expressions are intended to identify such forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Such statements are based on management's expectations as of April 26, 2018 and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Important factors that could cause actual results to differ materially from the company's expectations are set forth in Intel's earnings release dated April 26, 2018, which is included as an exhibit to Intel’s Form 8-K furnished to the SEC on such date. Additional information regarding these and other factors that could affect Intel's results is included in Intel's SEC filings, including the company's most recent reports on Forms 10-K and 10-Q. Copies of Intel's Form 10-K, 10-Q and 8-K reports may be obtained by visiting our Investor Relations website at www.intc.com or the SEC's website at www.sec.gov.

All information in this presentation reflects management’s views as of August 8, 2018. Intel does not undertake, and expressly disclaims any duty, to update any statement made in this presentation, whether as a result of new information, new developments or otherwise, except to the extent that disclosure may be required by law. Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure.

Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804 Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks.

Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. For more complete information about performance and benchmark results, visit http://www.intel.com/benchmarks Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate. Performance varies depending on hardware, software, and system configuration. For more information, visit http://www.intel.com/go/turbo All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps. © Copyright 2018 Intel Corporation Intel, the Intel logo, Intel Xeon, Intel Optane and Thunderbolt are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

Performance Benchmark Disclosure

Performance results are based on testing as of dates indicated in detailed configurations and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks. 1. Up to 1.35x on app virtualization – based on Intel internal testing as of 04/01/2018 on SPECvirt_sc* 2013: 1-node, 2x Intel® Xeon® Platinum 8180, Wolfpass platform, Total memory 768 GB, 24 slots / 32 GB/ 2666 MT/s DDR4 RDIMM, HyperThreading : Enable, Turbo: Enable, Storage (boot): 1x 400GB DC3700, Storage (application): 2 * 4TB DC P4500 PCIe NVME, Network devices: 2 x 82599ES dual port 10GbE, Network speed: 10GbE, ucode: 0x043, OS: Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.11.6.el7.x86_64 x86_64 vs. 1-node, 2x Intel® Xeon® cpu E5-2699 v4, Wildcat Pass - S2600WTTS1R, Total memory 512GB, 16 slots / 32 GB/ 2400 MT/s DDR4 RDIMM, HyperThreading : Enable, Turbo: Enable, Storage (boot): 1x 400GB DC3700, Storage (application): 2 * 4TB DC P4500 PCIe NVME, Network devices: 2 x 82599ES dual port 10GbE, Network speed: 10GbE. ucode: 0x02A, OS: Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.11.6.el7.x86_64 x86_64 2. 
1.65X Average 2S Performance: Geomean based on Normalized Generational Performance (estimated based on Intel internal testing as of 04/01/2018, on OLTP brokerage benchmark, HammerDB, SPECjbb®2015, SPEC*int_rate_base2017, SPEC*fp_rate_base2017, SPEC*virt_sc 2013, STREAM* triad, LAMMPS, DPDK L3 Packet Forwarding, Intel Distribution for LINPACK) a) Up to 1.35x on app virtualization – based on Intel internal testing as of 04/01/2018 on SPECvirt_sc* 2013: 1-node, 2x Intel® Xeon® Platinum 8180, Wolfpass platform, Total memory 768 GB, 24 slots / 32 GB/ 2666 MT/s DDR4 RDIMM, HyperThreading : Enable, Turbo: Enable, Storage (boot): 1x 400GB DC3700, Storage (application): 2 * 4TB DC P4500 PCIe NVME, Network devices: 2 x 82599ES dual port 10GbE, Network speed: 10GbE, ucode: 0x043, OS: Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.11.6.el7.x86_64 x86_64 vs. 1-node, 2x Intel® Xeon® cpu E5-2699 v4, Wildcat Pass - S2600WTTS1R, Total memory 512GB, 16 slots / 32 GB/ 2400 MT/s DDR4 RDIMM, HyperThreading : Enable, Turbo: Enable, Storage (boot): 1x 400GB DC3700, Storage (application): 2 * 4TB DC P4500 PCIe NVME, Network devices: 2 x 82599ES dual port 10GbE, Network speed: 10GbE. ucode: 0x02A, OS: Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.11.6.el7.x86_64 x86_64 b) Up to 1.45x on server side Java - estimates based on Intel internal testing as of 04/01/2018 on SPECjbb*2015 MultiJVM max-jOPS: # Nodes: 1, # Sockets: 2, SKU: Intel® Xeon® Platinum 8180 Processor, Platform: S2600WF (Wolf Pass), Memory configuration: 12 slots / 32 GB / 2666 MT/s DDR4, Total Memory per Node: 384, Baseline: ucode:0x2000030, Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.el7.x86_64. Update: ucode:0x2000043, Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.11.6.el7.x86_64 vs. # Nodes: 1, # Sockets: 2, SKU: Intel® Xeon® Processor E5-2699 v4, Platform: Wildcat Pass /, Total memory configuration/node: 8 slots / 32 GB / 2400 MT/s DDR4 RDIMM , Total Memory per Node: 256 GB, . 
Baseline: ucode: 0xB000020, OS: Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.el7. x86_64. Update: ucode: 0xB00002a, OS: Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0- 693.11.6.el7.x86_64 . c) Up to 1.55x on integer throughput performance - estimates based on Intel internal testing as of 04/01/2018 on SPECint*_rate_base2006 : 1-Node, 2 x Intel® Xeon® Platinum 8180M Processor on Wolf Pass SKX with 384 GB Total Memory on Red Hat Enterprise Linux* 7.4 using Benchmark software: SPEC CPU® 2017, Compiler: Intel® Compiler IC18 OEM, Optimized libraries: AVX512. Data Source: Request Number: 40, Benchmark: SPECrate*2017_int_base, Score: 281 Higher is better vs. 1-Node, 2 x Intel® Xeon® Processor E5-2699 v4 on Wildcat Pass with 256 GB Total Memory on Red Hat Enterprise Linux* 7.4 using Benchmark software: SPEC CPU® 2017 v1.2, Optimized libraries: IC18.0_20170901, Other Software: MicroQuill SMART HEAP, Script / config files : xCORE-AVX2. Data Source: Request Number: 40, Benchmark: SPECrate*2017_int_base, Score: 181 Higher is better d) Up to 1.55x on technical compute app throughput - estimates based on Intel internal testing as of 04/01/2018 on SPECfp*_rate_base2006: 1-Node, 2 x Intel® Xeon® Platinum 8180M Processor on Wolf Pass SKX with 384 GB Total Memory on Red Hat Enterprise Linux* 7.4 using Benchmark software: SPEC CPU® 2017, Compiler: Intel® Compiler IC18 OEM, Optimized libraries: AVX512. Data Source: Request Number: 39, Benchmark: SPECrate*2017_fp_base, Score: 236 Higher is better vs. 1-Node, 2 x Intel® Xeon® Processor E5-2699 v4 on Wildcat Pass with 256 GB Total Memory on Red Hat Enterprise Linux* 7.4 using Benchmark software: SPEC CPU® 2017 v1.2, Optimized libraries: IC18.0_20170901, Other Software: MicroQuill SMART HEAP, Script / config files : xCORE-AVX2. 
Data Source: Request Number: 39, Benchmark: SPECrate*2017_fp_base, Score: 148 Higher is better e) Up to 1.6x on est STREAM - triad - estimates based on Intel internal testing as of 04/01/2018 on STREAM - triad: 1-Node, 2 x Intel® Xeon® Platinum 8180M Processor on Wolf Pass SKX with 384 GB Total Memory on Red Hat Enterprise Linux* 7.4 using Benchmark software: STREAM , Compiler: Intel® Compiler IC17, Optimized libraries: AVX512. Data Source: Request Number: 37, Benchmark: STREAM - Triad, Score: 201.24 Higher is better vs. 1-Node, 2 x Intel® Xeon® Processor E5-2699 v4 on Wildcat Pass with 256 GB Total Memory on Red Hat Enterprise Linux* 7.4 using Benchmark software: STREAM, Optimized libraries: IC16, Other Software: AVX2. Data Source: Request Number: 37, Benchmark: STREAM - Triad, Score: 124.78 Higher is better f) Up to 1.6X higher Oracle database transactions – estimates based on Intel internal testing on HammerDB as of 04/01/2018: 1-Node, 2 x Intel® Xeon® Platinum 8180 Processor, Wolf Pass /S2600WF, Total Memory 768 GB, 24 slots/ 32 GB/2666 MT/s /DDR4 RDIMM, Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.11.6.el7.x86_64, uCode: 0x043, Hammerdb 2.23, Oracle 12.1, SSD DC S3700 series 800 GB, 2 x Intel DC P3700 PCI-E SSD for DATA, 2 x Intel DC P3700 PCI-E SSD for REDO, HT Yes, Turbo Yes. vs. 1-Node, 2 x Intel® Xeon® Processor E5-2699 v4, Wildcat Pass platform, Total Memory 384GB, 24 slots/16 GB/2133 MT/s DDR4 RDIMM, Red Hat Enterprise Linux* 7.4 Kernel: 3.10.0-693.21.1.el7.x86_64, uCode: 0x02A, Hammerdb 2.23, Oracle 12.1, SSD DC S3700 series 800 GB, 2 x Intel DC P3700 PCI-E SSD for DATA, 2 x Intel DC P3700 PCI-E SSD for REDO, HT Yes, Turbo Yes. 
g) Up to 1.75x on DPDK L3 Packet Forwarding - estimates based on Intel internal testing as of 04/01/2018: 1-node, 2x Intel® Xeon® Platinum 8180, Platform: Neon City, Total Memory 192GB, 12 slots / 16 GB/ 2666 MT/s DDR4 RDIMM, Benchmark: DPDK 17.11 L3fwd Sample App, gcc version 6.3.0, HyperThreading: Yes, Turbo: No, Kingston SUV400S37/240G boot, network devices: 2 x Intel XXV710-DA2, 5.51 firmware. ucode: 0x043, OS: Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.11.6.el7.x86_64 vs. 1-node, 2x Intel® Xeon® E5-2699v4, Mayan City platform, Total Memory 64GB, 8 slots / 8 GB/ 2400 MT/s DDR4 RDIMM, HyperThreading: Yes, Turbo: No, Kingston SUV400S37/240G boot, network devices: 2 x Intel XXV710-DA2, 5.51 firmware. ucode: 0x02a, OS: Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.11.6.el7.x86_64 h) Up to 2.2x on LAMMPS - estimates based on Intel internal measurements as of 04/01/2018: 1-node, 2-sockets of Intel® Xeon® Gold 6148, Platform: Wolf Pass / S2600WF/H48104-850, Memory configuration: 12 slots / 16 GB/ 2666 MT/s DDR4 RDIMM, Total Memory per Node: 192, Hyper-Threading: Yes, Turbo: Off, ucode: x043, OS: Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.11.6.el7.x86_64, Score: 74 vs. 1-node, 2-sockets of Intel® Xeon® E5-2699 v4, Platform: Grantley / S2600WTT/H48298-300, Memory configuration: 8 slots / 16 GB/ 2400 MT/s DDR4 RDIMM, Total Memory per Node: 128, HyperThreading: Yes, Turbo: Off, ucode: 0x02A, OS: Red Hat Enterprise Linux* 7.4, Kernel: 3.10.0-693.21.1.el7.x86_64, Score: 33.3 Higher is better i) Up to 2.2x Linpack throughput - estimates based on Intel internal testing as of 04/01/2018 on Intel® Distribution of LINPACK: 1-Node, 2 x Intel® Xeon® Platinum 8180M Processor on Wolf Pass SKX with 384 GB Total Memory on Red Hat Enterprise Linux* 7.4 OS Kernel: 3.10.0-693.11.6.el7.x86_64, Update uCode: 0x043 using Benchmark software: MP Linpack 2018.0.006, Compiler: l_mpi_2018.1.163, Optimized libraries: AVX512, Array 80000.
Data Source: Request Number: 38, Benchmark: Intel® Distribution of LINPACK, Score: 3367.5 Higher is better vs. 1-Node, 2 x Intel® Xeon® Processor E5-2699 v4 on Wildcat Pass with 256 GB Total Memory on Red Hat Enterprise Linux* 7.4 OS Kernel: 3.10.0-693.21.1.el7.x86_64 , uCode: 0x02A using Benchmark software: MP Linpack 2018.0.006, Optimized libraries: l_mpi_2018.1.163, AVX2, Array 80000, Other Software: MicroQuill SMART HEAP, Script / config files : xCORE-AVX2. Benchmark: Intel® Distribution of LINPACK, Score: 1427.23 Higher is better 3. https://blogs.saphana.com/2018/06/12/harnessing-hyperscale-processing-more-data-at-speed-with-persistent-memory
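The "1.65X Average 2S Performance" figure in item 2 above is a geometric mean of normalized per-benchmark gains. As a sketch of that calculation, the snippet below takes the nine "up to" ratios listed in items a) through i); note that the published 1.65x average was computed from the full measured data rather than these headline maxima, so the result here is only approximate.

```python
import math

def geomean(ratios):
    """Geometric mean of per-benchmark speedup ratios (new/old)."""
    return math.prod(ratios) ** (1.0 / len(ratios))

# The nine "up to" speedups quoted in items a) through i) above.
speedups = [1.35, 1.45, 1.55, 1.55, 1.6, 1.6, 1.75, 2.2, 2.2]
print(round(geomean(speedups), 2))  # close to the quoted 1.65x average
```

A geometric mean is used (rather than an arithmetic mean) so that no single benchmark's large ratio dominates the aggregate.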

Data-Centric Innovation Summit

NAVEEN RAO, CORPORATE VICE PRESIDENT & GENERAL MANAGER, ARTIFICIAL INTELLIGENCE PRODUCTS GROUP

AI is exploding and emerging as a critical workload. Data center logic silicon TAM1: ~30% CAGR, from $2.5B (2017) to $8-10B (2022), across training and inference.

1. Source: AI Si Server TAM is based on an amalgamation of analyst data and Intel analysis, based upon current expectations and available information, and is subject to change without notice.

AI is evolving

Proofs of concept → unlocking real value

AI is expanding: endpoint | edge | data center

Comprehensive AI portfolio

One size does not fit all:
Endpoint: IoT sensors (security, home, retail, industrial…), self-driving vehicles (autonomous driving), desktop & mobile (display, video, AR/VR, gestures), converged mobility (vision, speech, AR/VR).
Edge: servers, appliances & gateways; vision & inference for various system types; streaming, latency-bound systems.
Data center: servers & appliances; speech; most use cases; flexible & memory-bandwidth-bound use cases; built for deep learning; the foundation for AI.

Winning together with Intel AI

$1B+ AI business for Intel today (a subset of the full customer and partner list is shown).

*Other names and brands may be claimed as the property of others.

AI Development Lifecycle

Aggregate data → development cycle → inference (inference within a broader application).

Share of time by stage: label data (15%), load data (15%), augment data (23%), experiment with topologies (15%), tune hyperparameters (15%), support inference (8%), share results (8%).

Brought to life through data scientists: research → customize → deploy.
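The lifecycle stages above, where data handling alone accounts for roughly half the cycle, can be made concrete as a pipeline skeleton. The stage names follow the slide; the function bodies are hypothetical placeholders, not a real training framework.

```python
# Skeleton of the AI development lifecycle stages named on the slide.
# Bodies are illustrative stand-ins; percentages are the slide's time shares.

def label_data(raw):      return [(x, x % 2) for x in raw]               # 15%
def load_data(labeled):   return list(labeled)                           # 15%
def augment_data(data):   return data + [(x + 100, y) for x, y in data]  # 23%
def experiment(data):     return {"topology": "mlp", "n": len(data)}     # 15%
def tune(model):          return {**model, "lr": 0.01}                   # 15%
def support_inference(m): return lambda x: (x + m["lr"]) > 0             # 8%
def share_results(m):     return f"model={m['topology']} lr={m['lr']}"   # 8%

# Aggregate data -> development cycle -> inference, as on the slide.
data = augment_data(load_data(label_data(range(4))))
model = tune(experiment(data))
predict = support_inference(model)
print(share_results(model), predict(1.0))
```

The point of the skeleton is structural: five of the seven stages (and most of the time) sit before any inference is served.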

Intel® Xeon® Scalable processors: the foundation for AI

Inference and training performance on the Intel® Xeon® Platinum 8180 processor (codenamed Skylake)2, Intel® Optimization for Caffe ResNet-501: 1.0x baseline (July 2017) → 5.4x (INT8) inference and 1.4x training (July 2018). Continued investments in optimizations deliver increased performance.

1 Intel® Optimization for Caffe ResNet-50 performance does not necessarily represent other framework performance. 2 Based on Intel internal testing: 1X (7/11/2017), 2.8X (1/19/2018), 1.4x (8/2/2018) and 5.4X (7/26/2018) performance improvement based on Intel® Optimization for Caffe ResNet-50 inference throughput performance on Intel® Xeon® Scalable Processor. See Configuration Details Slide #36.

Performance results are based on testing as of 7/11/2017 (1x), 1/19/2018 (2.8x), 8/2/2018 (1.4x) & 7/26/2018 (5.4x) and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.

Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
For more complete information visit http://www.intel.com/performance.

Intel® Xeon® Scalable Processors: Inference, the foundation for AI
Relative inference performance (Intel® Optimization for Caffe ResNet-50): 1.0 baseline (July 2017, Intel® Xeon® Platinum 8180 processor, codenamed Skylake); 5.4x with INT8 (July 2018); projected 11x with INT8 on a future Intel® Xeon® Scalable processor (codenamed Cascade Lake) with Intel® DL Boost and the Vector Neural Network Instruction (VNNI). Continued investments in optimizations to deliver increased performance.
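The INT8 gains above come from quantizing FP32 tensors down to 8-bit integers. As an illustration only (not Intel's MKL-DNN implementation), a minimal symmetric per-tensor quantization sketch in Python:

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization of floats to INT8.

    The scale maps the largest magnitude onto the INT8 range [-127, 127];
    each value is rounded to the nearest step of that scale.
    """
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Map INT8 codes back to approximate float values."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.0, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# The largest-magnitude value maps to +/-127 exactly; others lose
# whatever precision falls below one quantization step.
```

Real inference stacks quantize per channel and calibrate activation ranges from sample data, but the core arithmetic is this scale-and-round step.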

1 Intel® Optimization for Caffe ResNet-50 performance does not necessarily represent other framework performance. 2 Based on Intel internal testing: 1x (7/11/2017), 2.8x (1/19/2018), 1.4x (8/2/2018) and 5.4x (7/26/2018) performance improvement based on Intel® Optimization for Caffe ResNet-50 inference throughput performance on Intel® Xeon® Scalable Processor. See Configuration Details Slide #36. Performance results are based on testing as of 7/11/2017 (1x), 1/19/2018 (2.8x), 8/2/2018 (1.4x) and 7/26/2018 (5.4x) and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. #IntelDCISummit
For more complete information visit http://www.intel.com/performance.

Intel® Nervana™ NNP L-1000: PURPOSE-BUILT FOR REAL-WORLD AI PERFORMANCE

Optimized across memory, bandwidth, utilization and power
3-4x the training performance of the first-generation NNP product
High-bandwidth, low-latency interconnects
bfloat16 numerics

First Commercial NNP in 2019

Source: Based on Intel measurements on limited-distribution SDV (codenamed Lake Crest) compared to Intel measurements on NNP-100 simulated product. #IntelDCISummit
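The bfloat16 numerics mentioned above keep float32's 8-bit exponent but truncate the mantissa to 7 bits, preserving range while trading away precision. A minimal sketch of the conversion, for illustration only (real hardware also applies rounding rather than plain truncation):

```python
import struct

def float_to_bfloat16_bits(x):
    """Truncate an IEEE-754 float32 to bfloat16 by keeping the top 16 bits
    (sign + 8-bit exponent + 7-bit mantissa)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float(b):
    """Widen bfloat16 back to float32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# Powers of two and short mantissas survive exactly; other values
# lose their low mantissa bits.
assert bfloat16_bits_to_float(float_to_bfloat16_bits(1.0)) == 1.0
```

Because the exponent field matches float32, overflow behavior is unchanged, which is why bfloat16 is attractive for training where dynamic range matters more than mantissa precision.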


Other names and brands may be claimed as the property of others. #IntelDCISummit

Software Is Essential

TOOLKITS (application developers): OpenVINO™ Toolkit, Intel® Movidius™ SDK, future frameworks
LIBRARIES (data scientists): machine learning libraries (Scikit-learn, NumPy, MLlib), deep learning frameworks, nGraph deep learning compiler
FOUNDATION (library developers): analytics, machine & deep learning primitives (MKL-DNN, clDNN, Python, DAAL), deep learning graph compiler (Intel® nGraph™ Compiler; targets include GPU)
Abstraction increases from the foundation layer up to the toolkits.

Other names and brands may be claimed as the property of others. #IntelDCISummit

Novartis Drug Discovery


Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance. #IntelDCISummit

Novartis Drug Discovery: images 26x larger than ImageNet
ImageNet: 224 x 224 x 3; Novartis microscopy images: 1024 x 1280 x 3

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance. #IntelDCISummit

High Performance At Scale
Scaling of time to train (Intel® Omni-Path Architecture, Horovod and TensorFlow®): speedup over the 1-node baseline (1.0), measured in time to train, at 1, 2, 4 and 8 nodes.
Total memory used (192GB DDR4 per 2S Intel® Xeon® 6148 processor): 64.3GB (1 node), 128.6GB (2 nodes), 257.2GB (4 nodes), 514.4GB (8 nodes).
Multiscale convolutional neural network | Optimized libraries: Intel® MKL/MKL-DNN, clDNN, DAAL | Intel® Omni-Path Architecture
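Horovod-style data parallelism keeps a model replica on every node and averages gradients with an allreduce after each step. A toy pure-Python sketch of that averaging step (the runs above used Horovod over Intel® Omni-Path Architecture; the function names here are illustrative, not Horovod's API):

```python
def allreduce_average(worker_grads):
    """Average per-worker gradient vectors, as an allreduce would.

    worker_grads: list of equal-length gradient lists, one per worker.
    Returns the element-wise mean that every worker applies identically,
    keeping all model replicas in sync.
    """
    n_workers = len(worker_grads)
    return [sum(g) / n_workers for g in zip(*worker_grads)]

def sgd_step(weights, grads, lr=0.1):
    """Apply one SGD update using the averaged gradient."""
    return [w - lr * g for w, g in zip(weights, grads)]

# Two workers computed gradients on different data shards:
avg = allreduce_average([[1.0, 2.0], [3.0, 4.0]])  # [2.0, 3.0]
new_w = sgd_step([0.5, 0.5], avg)
```

Because each worker processes a different shard of the data per step, adding nodes shortens time to train while the aggregate memory footprint grows roughly linearly, which matches the per-node memory figures in the chart above.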

§ Configuration: CPU: Intel Xeon 6148 processor @ 2.4GHz, Hyper-threading: Enabled. NIC: Intel® Omni-Path Host Fabric Interface, TensorFlow: v1.7.0, Horovod: 0.12.1, OpenMPI: 3.0.0. OS: CentOS 7.3, OpenMPU 23.0.0, Python 2.7.5. Time to train: converge to 99% accuracy in model. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance. Performance results are based on testing as of 5/25/2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. #IntelDCISummit

Taboola Chooses Intel® Xeon® Scalable Processors to Scale Inference

2.5x INFERENCE IMPROVEMENT
Throughput (recommendations/sec): 793 with the baseline (TensorFlow with Eigen) vs. 2037 with Intel-optimized TensorFlow, a ~2.5x speedup.

"Serving from the CPUs helped us reduce costs, increase efficiency, and provide better content recommendations." - Ariel Pisetzky, VP of Information Technology
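The 2.5x figure is simply the ratio of the two throughput bars; a quick arithmetic check:

```python
baseline = 793      # recommendations/sec, TensorFlow with Eigen
optimized = 2037    # recommendations/sec, Intel-optimized TensorFlow (MKL-DNN)

speedup = optimized / baseline
# 2037 / 793 is roughly 2.57, reported on the slide as ~2.5x
```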

Performance results are based on testing as of 8/6/2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. #IntelDCISummit

Vibrant AI Ecosystem
CROSS-VERTICAL: OEMs, system integrators

VERTICAL: HEALTHCARE, FINANCIAL SERVICES, RETAIL, TRANSPORTATION, NEWS, MEDIA & ENTERTAINMENT, AGRICULTURE, LEGAL & HR, ROBOTIC PROCESS AUTOMATION

HORIZONTAL: BUSINESS INTELLIGENCE & ANALYTICS, VISION, CONVERSATIONAL BOTS, AI TOOLS & CONSULTING, AI PaaS

Designed to accelerate customer adoption


#IntelDCISummit

Engaging With Developers

Open Source Community: GitHub stars in the three months from launch for NLP Architect, Coach, Distiller and nGraph (chart)
AI Academy & AI DevCloud: trained 110K developers; engaged with 90 universities; 150K users each month, sharing 800+ AI projects
AI Developers Conference: 950 attendees; 50+ sessions, 50% by customers, partners & academia; 90% of sessions standing room only; global: US, India, Europe, China

#IntelDCISummit

Summary

Intel® Xeon® Scalable processors are the foundations for AI, $1B+ business

Delivering tools and software that simplify the development of AI applications

Investing in cutting-edge, purpose-built silicon engineered for the future of AI #IntelDCISummit

Configuration Details 1.4x training throughput improvement in August 2018: Tested by Intel as of measured August 2nd 2018. Processor: 2 socket Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz / 28 cores HT ON , Turbo ON Total Memory 376.46GB (12slots / 32 GB / 2666 MHz). CentOS Linux-7.3.1611-Core kernel 3.10.0-693.11.6.el7.x86_64, SSD sda RS3WC080 HDD 744.1GB,sdb RS3WC080 HDD 1.5TB,sdc RS3WC080 HDD 5.5TB , Deep Learning Framework Intel® Optimizations for caffe version:a3d5b022fe026e9092fc7abc7654b1162ab9940d Topology::resnet_50 BIOS:SE5C620.86B.00.01.0013.030920180427 MKLDNN: version: 464c268e544bae26f9b85a2acb9122c766a4c396 NoDataLayer. Measured: 123 imgs/sec vs Intel tested July 11th 2017 Platform: Platform: 2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to “performance” via intel_pstate driver, 384GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC).Performance measured with: Environment variables: KMP_AFFINITY='granularity=fine, compact‘, OMP_NUM_THREADS=56, CPU Freq set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe: (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with “caffe time -- forward_only” command, training measured with “caffe time” command. For “ConvNet” topologies, dummy dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (GoogLeNet, AlexNet, and ResNet-50), https://github.com/intel/caffe/tree/master/models/default_vgg_19 (VGG-19), and https://github.com/soumith/convnet-benchmarks/tree/master/caffe/imagenet_winners (ConvNet benchmarks; files were updated to use newer Caffe prototxt format but are functionally equivalent). Intel C++ compiler ver. 
17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with “numactl -l“.

5.4x inference throughput improvement in August 2018: Tested by Intel as of measured July 26th 2018 :2 socket Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz / 28 cores HT ON , Turbo ON Total Memory 376.46GB (12slots / 32 GB / 2666 MHz). CentOS Linux- 7.3.1611-Core, kernel: 3.10.0-862.3.3.el7.x86_64, SSD sda RS3WC080 HDD 744.1GB,sdb RS3WC080 HDD 1.5TB,sdc RS3WC080 HDD 5.5TB , Deep Learning Framework Intel® Optimized caffe version:a3d5b022fe026e9092fc7abc7654b1162ab9940d Topology::resnet_50_v1 BIOS:SE5C620.86B.00.01.0013.030920180427 MKLDNN: version:464c268e544bae26f9b85a2acb9122c766a4c396 instances: 2 instances socket:2 (Results on Intel® Xeon® Scalable Processor were measured running multiple instances of the framework. Methodology described here: https://software.intel.com/en- us/articles/boosting-deep-learning-training-inference-performance-on-xeon-and-xeon-phi) NoDataLayer. Datatype: INT8 Batchsize=64 Measured: 1233.39 imgs/sec vs Tested by Intel as of July 11th 2017:2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to “performance” via intel_pstate driver, 384GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC).Performance measured with: Environment variables: KMP_AFFINITY='granularity=fine, compact‘, OMP_NUM_THREADS=56, CPU Freq set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe: (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with “caffe time --forward_only” command, training measured with “caffe time” command. For “ConvNet” topologies, dummy dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (ResNet-50). Intel C++ compiler ver. 
17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with “numactl -l“.

11X inference throughput improvement with Cascade Lake: Future Intel Xeon Scalable processor (codename Cascade Lake) results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance vs Tested by Intel as of July 11th 2017: 2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to “performance” via intel_pstate driver, 384GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC). Performance measured with: Environment variables: KMP_AFFINITY='granularity=fine, compact‘, OMP_NUM_THREADS=56, CPU Freq set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe: (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with “caffe time --forward_only” command, training measured with “caffe time” command. For “ConvNet” topologies, dummy dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (ResNet-50). Intel C++ compiler ver. 17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with “numactl -l“.

Configuration Details

2.5x Taboola Inference Improvement: Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz; 2 sockets, 28 cores/socket, Hyper-threading ON, Turbo boost OFF, CPU scaling governor “performance”; RAM: Samsung 192 GB DDR4@2666MHz (16GB DIMMs x 12); BIOS: Intel SE5C620.86B.0X.01.0007.062120172125; Hard Disk: INTEL SSDSC2BX01 1.5TB; OS: CentOS Linux release 7.5.1804 (Core) (3.10.0-862.9.1.el7.x86_64). Baseline: TensorFlow-Serving r1.9 -- https://github.com/tensorflow/serving. Intel Optimized TensorFlow: TensorFlow-Serving r1.9 + Intel MKL-DNN + Optimizations. MKL-DNN: https://mirror.bazel.build/github.com/intel/mkl-dnn/archive/0c1cf54b63732e5a723c5670f66f6dfb19b64d20.tar.gz MKLML: https://mirror.bazel.build/github.com/intel/mkl-dnn/releases/download/v0.15/mklml_lnx_2018.0.3.20180406.tgz Performance results are based on testing as of (08/06/2018) and may not reflect all publicly available security updates. No product can be absolutely secure.

#IntelDCISummit Disclosures

Statements in this presentation that refer to business outlook, future plans and expectations are forward-looking statements that involve a number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "goals," "plans," "believes," "seeks," "estimates," "continues," "may," "will," “would,” "should," “could,” and variations of such words and similar expressions are intended to identify such forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward- looking statements. Such statements are based on management's current expectations, unless an earlier date is indicated, and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Important factors that could cause actual results to differ materially from the company's expectations are set forth in Intel's earnings release dated July 26, 2018, which is included as an exhibit to Intel’s Form 8-K furnished to the SEC on such date. Additional information regarding these and other factors that could affect Intel's results is included in Intel's SEC filings, including the company's most recent reports on Forms 10-K and 10-Q. Copies of Intel's Form 10-K, 10-Q and 8-K reports may be obtained by visiting our Investor Relations website at www.intc.com or the SEC's website at www.sec.gov.

All information in this presentation reflects management’s views as of the date of this presentation, unless an earlier date is indicated. Intel does not undertake, and expressly disclaims any duty, to update any statement made in this presentation, whether as a result of new information, new developments or otherwise, except to the extent that disclosure may be required by law.

#IntelDCISummit

AUGUST 8, 2018 | SANTA CLARA, CA
Data-Centric Innovation Summit

Navin Shenoy EXECUTIVE VICE PRESIDENT & GENERAL MANAGER DATA CENTER GROUP

Decreasing cost of technology (charts): cost of compute and cost of storage, 2012-2017; performance increase, 2006-2017

Data-Centric Si TAM from 2017, total TAM >$160B (vs. 2017 revenue; 9% CAGR 2017-2022):
2021: data center ~$70B, non-volatile memory ~$55B, IoT + ADAS ~$30B, FPGA ~$7B
2022: data center ~$90B, non-volatile memory ~$75B, IoT + ADAS ~$33B, FPGA ~$8B
Segments span network, AI, Intel Optane SSDs, 3D NAND, ADAS, industrial, connectivity, video, retail.

Public | Private | Hybrid

Intel cloud SP revenue, 2013-2017: growth from TAM expansion (new business, consumer) and enterprise business conversion; increasing need for custom CPUs
Intel cloud SP CPU volume, 2013 vs. 2017: shift from standard CPUs toward custom CPUs

Devices | Things; Access | Edge; Core Data Center | Cloud

Network logic silicon TAM | 2022
AI data center logic silicon TAM: ~30% CAGR, 2017 to 2022 (inference and training)
Silicon Photonics | Omni-Path Fabric | Ethernet
Global data center traffic per year: 6.8ZB (2016) growing to 20.6ZB (2021), across DC-to-user, DC-to-DC and within-DC traffic
Connectivity logic silicon TAM: ~25% CAGR, 2017 to 2022
Intel® Omni-Path Fabric: leading HPC fabrics

Intel® Ethernet: #1 MSS in high-speed Ethernet1; Cascade Glacier SmartNIC coming 2019

Intel® Silicon Photonics: silicon integration, silicon manufacturing, silicon scale (2017 to 2022)
Silicon Photonics | Omni-Path Fabric | Ethernet

Memory and storage tiers:
DRAM: hot tier
Persistent memory: improving memory capacity
SSD: warm tier, improving storage performance; Intel® 3D NAND SSD delivering efficient storage
HDD / tape: cold tier
Data center memory SAM | 2022

Unique Intel platform (3D XPoint™ media, memory module, integrated platform, software value of persistence, memory ecosystem enabling):
Spark SQL DS: more performance vs. DRAM at 2.6TB data scale
Apache Cassandra: more read transactions and more users per system vs. a comparable server system with DRAM & NAND NVMe drives
Start time from minutes to seconds; three-9s availability

www.intel.com/benchmarks

Silicon Photonics | Omni-Path Fabric | Ethernet

Intel® Xeon® processor timeline, 1998 to 2018:

AI, NETWORK & HPC

SKYLAKE

DATABASE: BROADWELL
1ST CLOUD CUSTOM CPU: HASWELL
VISUAL CLOUD, STORAGE: IVY BRIDGE

ENCRYPTION

WESTMERE | BECKTON

NEHALEM

ALLENDALE | WOLFDALE | DUNNINGTON

VIRTUALIZATION: YORKFIELD | HARPERTOWN
DEMPSEY | SOSSAMAN | WOODCREST
IRWINDALE | PAXVILLE

NOCONA

>2S SUPPORT: PRESTONIA | GALLATIN

FOSTER

TANNER | CASCADES

DRAKE

Early ship program; ramp of Xeon volume; Xeon units shipping at 1M units per quarter

Leadership performance vs. other x86 offerings:
up to 1.48x per-core performance | up to 1.72x L3 packet forwarding | up to 3.20x High Performance Linpack | up to 1.85x database | up to 1.45x memory caching

ULTIMATE FLEXIBILITY: sockets, SKUs, GHz, watts, price points

INTEL OPTIMIZATION FOR CAFFE RESNET-50

Inference throughput (images/sec), Intel® Xeon® Scalable processor: 1.0 FP32 baseline (Jul '17); 2.8x with framework optimizations (Jan '18); 5.4x with INT8 optimizations (Aug '18)

"Machine learning is a big part of our heritage. It works on GPUs today, but it also works on instances powered by highly customized Intel Xeon processors." - Bratin Saha, VP & GM, Machine Learning Platforms, Amazon AI, Amazon

"Inference is one thing we do, but we do lots more. That's why flexibility is really essential." - Kim Hazelwood, Head of AI Infrastructure Foundation, Facebook

INTEL® XEON® PROCESSOR AI WINS

NEXT INTEL® XEON® SCALABLE PROCESSOR

With support for Intel® Optane™ DC persistent memory
Leadership performance | Optimized cache hierarchy | Higher frequencies | Security mitigations | Optimized frameworks & libraries

INTRODUCING

Intel® DL Boost: Vector Neural Network Instruction (VNNI) for inference acceleration, with framework & library support (MKL-DNN)
Intel Optimization for Caffe ResNet-50, inference throughput (images/sec) on Intel® Xeon® Scalable processor: 1.0 FP32 (Jul '17); 2.8x framework optimizations (Jan '18); 5.4x INT8 optimizations (Aug '18)

FASTER. EASIER. OPTIMIZED.

Tightly specified HW & SW components: simplified evaluation
Pre-defined settings & system-wide tuning: fast & easy to deploy
Designed to deliver optimal performance: workload optimized

Intel Select Solution for AI: BigDL on Apache Spark
Intel Select Solution for Blockchain: Hyperledger Fabric
Intel Select Solution: SAP HANA Certified Appliance

Intel® Select Solution configurations and benchmark results are Intel verified

Transistors & Packaging | Architecture | Memory | Interconnects | Security | Software & Solutions
SR. VICE PRESIDENT, GM, SILICON ENGINEERING GROUP, INTEL

Intel® Xeon® roadmap on the 14nm/10nm platform:
2018: 14nm, shipping Q4 '18 (Intel Optane persistent memory, Intel DL Boost: VNNI, security mitigations)
2019: 14nm (next-gen Intel DL Boost: bfloat16)
2020: 10nm

It's a new era of data-centric computing, fueled by cloud; network | 5G | edge; and artificial intelligence
The data-centric opportunity is massive: the largest opportunity in Intel's history, over $200B TAM by 2022
Intel has unparalleled assets to fuel growth: a portfolio of leadership products to move, store and process data

Disclosures

Statements in this presentation that refer to business outlook, future plans and expectations are forward-looking statements that involve a number of risks and uncertainties. Words such as "anticipates," "expects," "intends," "goals," "plans," "believes," "seeks," "estimates," "continues," "may," "will," “would,” "should," “could,” and variations of such words and similar expressions are intended to identify such forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward- looking statements. Such statements are based on management's current expectations, unless an earlier date is indicated, and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Important factors that could cause actual results to differ materially from the company's expectations are set forth in Intel's earnings release dated July 26, 2018, which is included as an exhibit to Intel’s Form 8-K furnished to the SEC on such date. Additional information regarding these and other factors that could affect Intel's results is included in Intel's SEC filings, including the company's most recent reports on Forms 10-K and 10-Q. Copies of Intel's Form 10-K, 10-Q and 8-K reports may be obtained by visiting our Investor Relations website at www.intc.com or the SEC's website at www.sec.gov.

All information in this presentation reflects management’s views as of the date of this presentation, unless an earlier date is indicated. Intel does not undertake, and expressly disclaims any duty, to update any statement made in this presentation, whether as a result of new information, new developments or otherwise, except to the extent that disclosure may be required by law. Configuration Details 1.48x: Per Core Performance Intel Xeon Platinum 8180: Intel Xeon-based Reference Platform with 2 Intel Xeon 8180 (2.5GHz, 28 core) processors, BIOS ver SE5C620.86B.00.01.0014.070920180847, 07/09/2018, microcode: 0x200004d, HT ON, Turbo ON, 12x32GB DDR4-2666, 1 SSD, Ubuntu 18.04.1 LTS (4.17.0-041700-generic Retpoline), 1-copy SPEC CPU 2017 integer rate base benchmark compiled with Intel Compiler 18.0.2 -O3, executed on 1 core using taskset and numactl on core 0. Estimated score = 6.59, as of 8/2/2018 tested by Intel AMD EPYC 7601: Supermicro AS-2023US-TR4 with 2S AMD EPYC 7601 with 2 AMD EPYC 7601 (2.2GHz, 32 core) processors, BIOS ver 1.1a, 4/26/2018, microcode: 0x8001227, SMT ON, Turbo ON, 16x32GB DDR4-2666, 1 SSD, Ubuntu 18.04.1 LTS (4.17.0-041700-generic Retpoline), 1-copy SPEC CPU 2017 integer rate base benchmark compiled with AOCC ver 1.0 -Ofast, -march=znver1, executed on 1 core using taskset and numactl on core 0. Estimated score = 4.45, as of 8/2/2018 tested by Intel

3.20x: High Performance Linpack Intel Xeon Platinum 8180: Intel Xeon-based Reference Platform with 2 Intel Xeon 8180 (2.5GHz, 28 core) processors, BIOS ver SE5C620.86B.00.01.0014.070920180847, 07/09/2018, microcode: 0x200004d, HT ON (1 thread per core), Turbo ON, 12x32GB DDR4-2666, 1 SSD, Ubuntu 18.04.1 LTS (4.17.0-041700-generic Retpoline), High Performance Linpack v2.1, compiled with Intel(R) Parallel Studio XE 2018 for Linux, Intel MPI and MKL Version 18.0.0.128, Benchmark Config: Nb=384, N=203136, P=1, Q=2, Q=4, Score = 3507.38GFs, as of July 31, 2018 tested by Intel AMD EPYC 7601: Supermicro AS-2023US-TR4 with 2 AMD EPYC 7601 (2.2GHz, 32 core) processors, SMT OFF, Turbo ON, BIOS ver 1.1a, 4/26/2018, microcode: 0x8001227, 16x32GB DDR4-2666, 1 SSD, Ubuntu 18.04.1 LTS (4.17.0-041700-generic Retpoline), High Performance Linpack v2.2, compiled with Intel(R) Parallel Studio XE 2018 for Linux, Intel MPI version 18.0.0.128, AMD BLIS ver 0.4.0, Benchmark Config: Nb=232, N=168960, P=4, Q=4, Score = 1095GFs, as of July 31, 2018 tested by Intel

1.85x: Database Intel Xeon Platinum 8180: Intel Xeon-based Reference Platform with 2 Intel Xeon 8180 (2.5GHz, 28 core) processors, BIOS ver SE5C620.86B.0X.01.0115.012820180604, microcode: 0x2000043, HT ON, Turbo ON, 24x32GB DDR4-2666, 1 x Intel DC P3700 PCI-E SSD (2TB, 1/2 Height PCIe 3.0, 20nm, MLC), Red Hat Enterprise Linux 7.4 (3.10.0-693.11.6.el7.x86_64 IBRS), HammerDB ver 2.3, PostgreSQL ver 9.6.5, Score = 2,250,481 tpm, as of 3/15/2018 tested by Intel AMD EPYC 7601: HPE Proliant DL385 Gen10 with 2 AMD EPYC 7601 (2.2GHz, 32 core) processors, ROM ver 1.06, microcode: 0x8001227, SMT ON, Turbo ON, 16x32GB DDR4-2666, 1 x Intel DC P3700 PCI-E SSD (2TB, 1/2 Height PCIe 3.0, 20nm, MLC), Red Hat Enterprise Linux 7.4 (3.10.0-693.21.1.el7.x86_64 Retpoline), HammerDB ver 2.3, PostgreSQL ver 9.6.5, Score = 1,210,575 tpm, as of 4/12/2018 tested by Intel

1.45x: Memcached (Memory Object Caching) Intel Xeon Platinum 8180: Intel Reference Platform with 2 Intel Xeon 8180 (2.5GHz, 28C) processors, BIOS ver SE5C620.86B.00.01.0014.070920180847, 07/09/2018, microcode: 0x200004d, HT ON, Turbo ON, 12x32GB DDR4-2666, 1SSD, 1 40GbE PCIe XL710 Adapter, Ubuntu 18.04.1 LTS (4.17.0-041700-generic Retpoline), Memcached using YCSB benchmark Workloadc, YCSB 0.16.0, Memcached v1.5.9, Max throughput (ops/sec) with P99 latency < 1ms, Score: 2711265 ops/sec, as of 8/2/2018 tested by Intel AMD EPYC 7601: Supermicro AS-2023US-TR4 with 2 AMD EPYC 7601 (2.2GHz, 32C) processors, BIOS ver 1.1a, 4/26/2018, microcode: 0x8001227, SMT ON, Turbo ON, 16x32GB DDR4-2666, 1SSD, 1 40GbE PCIe XL710 Adapter, Ubuntu 18.04 LTS, (4.17.0-041700-generic Retpoline), Memcached using YCSB benchmark Workloadc, YCSB 0.16.0, Memcached v1.5.9, Max throughput (ops/sec) with P99 latency < 1ms, Score: 1862841 ops/sec, as of 8/2/2018 tested by Intel

1.72x: L3 Packet Forwarding Intel Xeon Platinum 8180: Supermicro X11DPG-QT with 2 Intel Xeon-SP 8180 (2.5GHz, 28C) processors, BIOS ver 2.0b, microcode: 0x2000043, 12x32GB DDR4-2666, 1 SSD, 2x Intel XXV710-DA2 PCI Express (2x25GbE), DPDK L3fwd sample application (IPv4 LPM, 256B packet size, 625000 flows), DPDK 17.11, Ubuntu 17.10, (4.13.0-31-generic IBRS), HT ON, Turbo OFF, Score= 42.22 Million Packets / second, as of 8/2/2018 tested by Intel AMD EPYC 7601, Supermicro AS-2023US-TR4 with 2 AMD EPYC 7601 (2.2GHz, 32C) processors, BIOS ver 1.1a, microcode: 0x8001227, 16x32GB DDR4-2666, 1 SSD, 2x Intel XXV710-DA2 PCI Express (2x25GbE), DPDK L3fwd sample application (IPv4 LPM, 256B packet size, 625000 flows), DPDK 17.11, Ubuntu 17.10 (4.13.0-36-generic Retpoline), SMT ON, Turbo (core boost) OFF, Score= 24.52 Million Packets / second, as of 8/2/2018 tested by Intel Intel Optane Persistent Memory Configuration Details

Performance results are based on testing: 8x (8/2/2018), 9x reads / 11x users (5/24/2018), and minutes-to-seconds (5/30/2018), and may not reflect all publicly available security updates. No product can be absolutely secure. Results have been estimated based on tests conducted on pre-production systems: 8x (running OAP with a 2.6TB scale factor on I/O-intensive queries), 9x reads / 11x users (running Cassandra optimized for persistent memory), and minutes-to-seconds (running Aerospike* Hybrid Memory Architecture optimized for persistent memory), and are provided to you for informational purposes.

AI Performance Configuration Details

1x inference throughput in July 2017 (baseline): Tested by Intel as of July 11, 2017. Platform: 2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, Turbo disabled, scaling governor set to "performance" via the intel_pstate driver, 384GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC). Performance measured with: environment variables KMP_AFFINITY='granularity=fine,compact' and OMP_NUM_THREADS=56; CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with the "caffe time --forward_only" command; training measured with the "caffe time" command. For "ConvNet" topologies a dummy dataset was used; for other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (ResNet-50) and https://github.com/soumith/convnet-benchmarks/tree/master/caffe/imagenet_winners (ConvNet benchmarks; files were updated to the newer Caffe prototxt format but are functionally equivalent). Intel C++ compiler ver. 17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with "numactl -l".
2.8x inference throughput improvement in January 2018: Tested by Intel as of January 19, 2018. Processor: 2-socket Intel® Xeon® Platinum 8180 CPU @ 2.50GHz, 28 cores, HT ON, Turbo ON. Total memory 376.46GB (12 slots / 32GB / 2666MHz). CentOS Linux 7.3.1611 (Core); storage: sda RS3WC080 HDD 744.1GB, sdb RS3WC080 HDD 1.5TB, sdc RS3WC080 HDD 5.5TB. Deep learning framework: Intel® Optimization for Caffe, version f6d01efbe93f70726ea3796a4b89c612365a6341. Topology: resnet_50_v1. BIOS: SE5C620.86B.00.01.0009.101920170742. MKL-DNN version: ae00102be506ed0fe2099c6557df2aa88ad57ec1. NoDataLayer. Datatype: FP32, Batchsize=64. Measured: 652.68 imgs/sec, vs. the July 11, 2017 baseline configuration described above (1x).
5.4x inference throughput improvement in August 2018: Tested by Intel as of July 26, 2018. Processor: 2-socket Intel® Xeon® Platinum 8180 CPU @ 2.50GHz, 28 cores, HT ON, Turbo ON. Total memory 376.46GB (12 slots / 32GB / 2666MHz). CentOS Linux 7.3.1611 (Core), kernel 3.10.0-862.3.3.el7.x86_64; storage: sda RS3WC080 HDD 744.1GB, sdb RS3WC080 HDD 1.5TB, sdc RS3WC080 HDD 5.5TB. Deep learning framework: Intel® Optimization for Caffe, version a3d5b022fe026e9092fc7abc7654b1162ab9940d. Topology: resnet_50_v1. BIOS: SE5C620.86B.00.01.0013.030920180427. MKL-DNN version: 464c268e544bae26f9b85a2acb9122c766a4c396. Instances: 2, sockets: 2 (results on the Intel® Xeon® Scalable processor were measured running multiple instances of the framework; methodology described at https://software.intel.com/en-us/articles/boosting-deep-learning-training-inference-performance-on-xeon-and-xeon-phi). NoDataLayer. Datatype: INT8, Batchsize=64. Measured: 1233.39 imgs/sec, vs. the July 11, 2017 baseline configuration described above (1x).

11x inference throughput improvement with Cascade Lake: Future Intel Xeon Scalable processor (codename Cascade Lake) results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and are provided to you for informational purposes. Any differences in your system hardware, software, or configuration may affect your actual performance. Compared against the July 11, 2017 baseline configuration described above (1x).
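The environment settings and "caffe time" commands quoted in the July 2017 baseline can be assembled into a minimal sketch; the model prototxt path is an illustrative assumption, not taken from the disclosure.

```shell
# Pin CPU frequency and thread affinity as quoted in the disclosure.
sudo cpupower frequency-set -d 2.5G -u 3.8G -g performance
export KMP_AFFINITY='granularity=fine,compact'
export OMP_NUM_THREADS=56

# Inference timing (forward pass only), with memory allocated on the
# local NUMA node via "numactl -l":
numactl -l ./build/tools/caffe time --forward_only \
  -model models/intel_optimized_models/resnet50/deploy.prototxt

# Training timing uses the same command without --forward_only:
numactl -l ./build/tools/caffe time \
  -model models/intel_optimized_models/resnet50/deploy.prototxt
```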