Innovation in Machine Intelligence

Neural network graph visualization from Graphcore Poplar™ tools

GRAPHCORE HAS DEVELOPED A NEW KIND OF HARDWARE THAT LETS INNOVATORS CREATE THE NEXT GENERATION OF MACHINE INTELLIGENCE

GRAPHCORE: ENABLING MACHINE INTELLIGENCE

• Founded in 2016

• Technology: Intelligence Processor Unit (IPU)

• Team: approaching 400 globally

• Offices: UK, US, China, Norway

• Raised >$320M

GRAPHCORE GLOBAL FOOTPRINT

Bristol (HQ) | London | Cambridge | Oslo | Brussels | Seattle | New York | Palo Alto | Austin | Seoul | Beijing | Taipei | Tokyo

ABOUT US…

Technology – processors and software solutions designed for AI
Products – IPU-Processor PCIe cards and the Poplar® software stack
Investors – >$310m in funding

MACHINE INTELLIGENCE EVOLUTION

STEP 1: Simple perception
STEP 2: Advanced perception – language understanding
STEP 3: Learning from experience

MACHINE INTELLIGENCE COMPUTE EXPONENTIAL…

[Chart: compute requirements, 2013–2019 – 300,000x growth from AlexNet to AlphaGo Zero; models shown include AlexNet, ResNets, Xception, NMT, AlphaZero, and AlphaGo Zero. Compute requirements adapted from the OpenAI article https://openai.com/blog/ai-and-compute/]

Model size is doubling every 3.5 months, driving increasing sparsification – more accuracy per parameter. Legacy processors hinder where machine intelligence can go next.

[Chart: compute requirements and model size, 2013–2019 – 300,000x compute growth from AlexNet to AlphaGo Zero; model sizes shown: ResNet50 (25M), BERT Large (330M), GPT-2 (1.55Bn). Compute requirements adapted from the OpenAI article https://openai.com/blog/ai-and-compute/; model size adapted from Hestness et al. 2017, "Deep learning scaling is predictable, empirically", arXiv:1712.00409]

A NEW APPROACH IS REQUIRED
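To put the headline rate in context, a 3.5-month doubling time compounds to roughly an order of magnitude per year. A quick sanity-check sketch (the doubling period is the slide's figure, not independently verified here):

```python
# Illustrative check of the slide's growth-rate claim.

def growth_factor(doubling_period_months: float, elapsed_months: float) -> float:
    """Exponential growth factor for a given doubling period."""
    return 2.0 ** (elapsed_months / doubling_period_months)

# Doubling every 3.5 months compounds to roughly 10x per year:
print(round(growth_factor(3.5, 12), 1))  # → 10.8
```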

LEGACY PROCESSOR ARCHITECTURES HAVE BEEN REPURPOSED FOR ML

[Diagram: CPUs built for apps and web (scalar) and GPUs built for graphics and HPC (vector), repurposed for ML]

A NEW PROCESSOR IS REQUIRED FOR THE FUTURE

[Diagram: CPU for apps and web (scalar) | GPU for graphics and HPC (vector) | IPU for artificial intelligence (graph)]

Wired – "How might we build systems that function more like a brain?"

Geoff Hinton – “I think we need to move towards a different type of computer. Fortunately I have one here…” Hinton reaches into his wallet and pulls out a large, shiny silicon chip:

an IPU processor from Graphcore.

IPU PROCESSOR

C2 IPU PROCESSOR CARD

• 2x Colossus GC2 IPU processors
• Card-to-card IPU-Links™ (2.5TBps)
• 200 teraflops mixed-precision IPU compute @ 315W

DELL DSS8440 IPU SERVER

• 8x dual-IPU C2 cards, 16x GC2 IPU-Processors
• >1.6 petaflops IPU compute with over 100,000 independent programs
• High-speed 256GB/s card-to-card IPU-Link™
• 100Gbps Infiniband scale-out
• Poplar SDK™

IPU ACHIEVES STATE-OF-THE-ART PERFORMANCE ON TODAY'S LEADING-EDGE MODELS…

BERT-BASE: TRAINING

State-of-the-art time to train: 56 hours on IPU @ 20% lower power

"NATURAL LANGUAGE PROCESSING MODELS ARE HUGELY IMPORTANT TO MICROSOFT. WE ARE EXTREMELY EXCITED BY THE POTENTIAL THAT THIS NEW COLLABORATION WITH GRAPHCORE WILL DELIVER FOR OUR CUSTOMERS" – GIRISH BABLANI, VP AZURE COMPUTE AT MICROSOFT

[Chart: time to train, 0–100hrs – IPU vs GPU (PyTorch) vs GPU (TensorFlow*)]

NOTES: BERT-Base | Wikipedia dataset + SQuAD 1.1 (EM). IPU: DSS8440, 7x Graphcore C2 – customer implementation using Poplar. GPU: 8x leading GPU system using PyTorch and TensorFlow (*estimated)

BERT-BASE: INFERENCE

3x higher throughput at 30% lower latency

[Chart: latency (ms) and throughput (sequences/second) – IPU vs GPU]

NOTES: Graphcore results on one C2 card using two IPUs, on SQuAD v1.1 data; Graphcore C2 customer implementation using Poplar @ 300W TDP. NVIDIA results for 1x V100 with TensorRT 6.0 using SQuAD v1.1 data, published 6 November 2019: https://developer.nvidia.com/deep-learning-performance-training-inference

RESNEXT-101: INFERENCE

Lowest-latency comparison: 43x higher throughput | 40x lower latency
Highest-throughput comparison: 3.4x higher throughput | 18x lower latency

[Chart: throughput (samples/second) vs latency (ms) – IPU vs GPU]

"WE ARE SEEING A SIGNIFICANT IMPROVEMENT – WITH 3.5X HIGHER PERFORMANCE – IN OUR IMAGE SEARCH CAPABILITY USING RESNEXT ON IPUS, OUT OF THE BOX" – ERIC LEANDRI, CEO, QWANT

NOTES: ResNext-101_32x4d | Real data (COCO). IPU: Graphcore C2 (SDK 1.0.49) using ONNX/PopART (batch size 2–12) @ 300W TDP. GPU: using PyTorch FP16 (batch size 1–32) @ 300W TDP

IPU DELIVERS MASSIVE PERFORMANCE ADVANTAGE ON DIFFICULT MACHINE LEARNING PROBLEMS

MCMC PROBABILISTIC MODEL: TRAINING

Customer implementation: 26x higher throughput

"WE WERE ABLE TO TRAIN ONE OF OUR PROPRIETARY PROBABILISTIC MODELS IN 4.5 MINUTES INSTEAD OF 2 HOURS. THAT'S 26X FASTER TIME TO TRAIN THAN OTHER LEADING PLATFORMS" – GEORGE SOKOLOFF, FOUNDER AND CIO, CARMOT CAPITAL

[Chart: time to result, 0–2hrs – IPU vs GPU]

NOTES: Graphcore customer Markov Chain Monte Carlo probability model (summary data shared with customer's permission). IPU: Graphcore GC2 @ 150W TDP. GPU @ 300W TDP

MCMC PROBABILISTIC MODEL: TRAINING
TensorFlow Probability model example

8x faster time to train

"THE GRAPHCORE IPU IS ALREADY ENABLING US TO EXPLORE NEW TECHNIQUES THAT HAVE BEEN INEFFICIENT OR SIMPLY NOT POSSIBLE BEFORE." – DANIELE SCARPAZZA, R&D TEAM LEAD, CITADEL

[Chart: time to result, 0–8hrs – IPU vs GPU]

NOTES: Markov Chain Monte Carlo probabilistic model example with TensorFlow Probability – a neural network with 3 fully-connected layers. IPU: Graphcore GC2 @ 150W TDP. GPU @ 300W TDP

DeepVoice graph visualization from Poplar®

POPLAR™: GRAPH TOOLCHAIN

Poplar® expands the ML framework output to a full compute graph.

USE YOUR EXISTING KNOWLEDGE AND MODELS

• Poplar graph libraries: popnn, poputils, poplin, popops, poprandom, popsparse
• C++ / Python graph framework
• Poplar visualization & debug tools
• Poplar graph compiler and Poplar graph engine
• Targets: IPU-Processor PCIe card, IPU servers and systems

POPLIBS™

Highly optimized open-source libraries partition work and data efficiently across IPU devices

C / C++ and Python language bindings

• poputil – utility functions for building graphs
• popops – pointwise and reduction operators
• poplin – matrix multiply functions
• poprandom – random number generation functions
• popnn – neural network functions (activation fns, pooling, loss)

POPLAR®

github.com/graphcore/poplibs
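For intuition only, the composition idea behind these libraries – a fully-connected graph function assembled from smaller compute elements – can be sketched in NumPy. The function names below are illustrative stand-ins, not the actual Poplar/poplibs C++ API:

```python
import numpy as np

# Hypothetical NumPy stand-ins for poplibs-style primitives
# (poplin ~ matrix multiply, popops ~ pointwise ops, popnn ~ NN functions).
def matmul(a, b):              # poplin-like compute element
    return a @ b

def add_bias(x, bias):         # popops-like pointwise compute element
    return x + bias

def relu(x):                   # popnn-like activation compute element
    return np.maximum(x, 0.0)

def fully_connected(x, w, b):
    """A 32-in/32-out graph function composed from smaller compute elements."""
    return relu(add_bias(matmul(x, w), b))

x = np.ones((1, 32))
w = np.full((32, 32), 0.1)
b = np.zeros(32)
y = fully_connected(x, w, b)
print(y.shape)  # → (1, 32)
```

In the real toolchain these building blocks are graph fragments that the Poplar graph compiler schedules across IPU tiles, rather than eager array operations.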

[Figure: BSP execution trace – per-tile compute, inter-chip sync, and exchange phases alternating over time, with host I/O at either end; one tile may abstain from a sync]

• First use of BSP inside a parallel processor
• BSP software bridging model – massively parallel computing with no concurrency hazards
• Widely used in parallel computing – Google, FB, …
• Easy to program – no live-locks or dead-locks
• 3 phases: compute, sync, exchange

BULK SYNCHRONOUS PARALLEL (BSP)

Software bridging model for parallel computing

• Compute: 10,000s of compute threads, all operating in parallel, each with all the data they need held locally
• Sync: all threads are synchronized
• Exchange: data is exchanged so that every thread has all the data it needs for the next phase of compute
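The compute → sync → exchange cycle can be sketched with ordinary threads and a barrier. This is an illustrative Python simulation, not Graphcore code; names such as `mailbox` are invented for this sketch:

```python
import threading

# Illustrative BSP simulation: each "tile" computes on local data,
# synchronizes at a barrier, then exchanges results with the others.
N_TILES = 4
barrier = threading.Barrier(N_TILES)   # the "sync" phase
mailbox = [0] * N_TILES                # stand-in for exchange memory
results = [0] * N_TILES

def tile(tile_id: int) -> None:
    # Compute phase: each tile works only on data it holds locally.
    local = tile_id + 1
    mailbox[tile_id] = local
    barrier.wait()                     # Sync: every tile reaches the barrier
    # Exchange phase: read the values the other tiles published.
    results[tile_id] = sum(mailbox)    # every tile now sees 1+2+3+4
    barrier.wait()                     # Sync again before the next compute phase

threads = [threading.Thread(target=tile, args=(i,)) for i in range(N_TILES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # → [10, 10, 10, 10]
```

Because reads of shared state happen only after the barrier, there are no data races to reason about – the "no concurrency hazards" property the slide claims for BSP.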

BSP phase 1 → BSP phase 2 → BSP phase 3

COMPUTATIONAL GRAPH EXECUTION MODEL

[Diagram: repeating cycle over time – SYNC → COMMUNICATION → SYNC → COMPUTE]

IPU BSP EXECUTION TRACE

[Figure: per-tile compute, exchange, and sync activity for ResNet-18 inference at batch size 1]

OPEN-SOURCE GRAPH LIBRARIES

> 50 open-source GRAPH FUNCTIONS available, including matmul, conv, etc. Example GRAPH FUNCTION: 32in_32out_Fully_Connected_Layer, built from…

> 750 optimized COMPUTE ELEMENTS such as ReduceAdd, AddToChannel, Zero, etc.

• Easily create new GRAPH FUNCTIONS using the library of COMPUTE ELEMENTS
• Modify and create new COMPUTE ELEMENTS
• Share library elements and new innovations

IPU-ACCELERATED MEDICAL IMAGING ON MICROSOFT AZURE: INTRACRANIAL HEMORRHAGE

Courtesy: RSNA. Intracranial hemorrhage inference on a ResNeXt-50 pretrained model. GPU: V100 (300W); IPU: single chip (150W). Inference results visualization (Microsoft InnerEye)

OUR IPU LETS INNOVATORS CREATE THE NEXT BREAKTHROUGHS IN MACHINE INTELLIGENCE

KEEP IN TOUCH WITH US

BLOG GRAPHCORE.AI/BLOG

NEWSLETTER GRAPHCORE.AI/NEWS

TWITTER @GRAPHCORE.AI

LINKEDIN LINKEDIN.COM/COMPANY/GRAPHCORE

FACEBOOK @GRAPHCORE.AI

THANK YOU

Victoria Rege, Director of Strategic Partnerships & Alliances – [email protected]
Alexander Titterton, Product Support Engineer – [email protected]