Hortonworks Data Platform on OpenPOWER Systems

Agenda

. Overview and Roadmap
. Hortonworks Client Value Proposition
. HDP on Power with Mellanox
. Performance
. Reference Architecture

Hortonworks Value Proposition

The Data Tipping Point

Gain Actionable Insights

[Diagram: use-case maturity journey across EXPLORE, OPTIMIZE and TRANSFORM. RENOVATE use cases (Active Archive, ETL Onboard, Data Enrichment, Data as a Service) and INNOVATE use cases (Data Discovery, Single View, Predictive Analytics), with examples ranging from sentiment analysis, fraud prevention and cyber security to proactive repair, supply-chain optimization, risk modeling and ad placement.]


HDP is a 100% Open Source Connected Data Platform

. Eliminates Risk of vendor lock-in by delivering 100% Apache open source technology
. Maximizes Community Innovation with hundreds of developers across hundreds of companies
. Integrates Seamlessly through committed co-engineering partnerships with other leading technologies


Hortonworks Influences the Apache Community

. We Employ the Committers: one third of all committers to the Apache® Hadoop™ project, and a majority in other important projects
. Our Committers Innovate and expand Open Enterprise Hadoop
. We Influence the Hadoop Roadmap by communicating important requirements to the community through our leaders

APACHE HADOOP COMMITTERS

Open Source Optimizes Variety and Cost Efficiencies

[Chart: data variety vs. cost efficiency, positioning Hortonworks open source Hadoop against proprietary Hadoop, EDW and RDBMS.]

. Hortonworks Employs the Committers: one third of all committers to the Apache® Hadoop™ project, and a majority in other important projects
. Eliminates Risk and Ensures Integration: prevents vendor lock-in and speeds ecosystem adoption of the ODPi-compliant core
. Unmatched Economics: supports low cost data-center and cloud architectures for Enterprise Apache Hadoop

Hortonworks Nourishes the Community and Ecosystem

. Hortonworks Community Connection
• Community Q/A resources
• Articles & code repos
• Community of (big data) developers
. Hadoop & Big Data Ecosystem
• Open ecosystem of Big Data for vendors & end-users
• Advance Apache™ Hadoop®
• Enable more Big Data apps
. Hortonworks Partnerworks
• World class partner program
• Network of partners providing best-in-class solutions


Hortonworks Delivers Proactive Support

Hortonworks SmartSense™ with machine learning and predictive analytics on your cluster

Integrated Customer Portal with knowledge base and on-demand training


HDP on Power with Mellanox

Hortonworks and IBM

Collaborate to offer an open source distribution on Power Systems: the latest Hortonworks Data Platform (HDP) provides IBM customers with more choice in open source Hadoop distributions for big data processing. (Las Vegas, NV, IBM Edge, 19 Sep 2016)

• Modern Data Applications on a Modern Data Platform

• Open on Open: 100% open Hadoop and Spark on OpenPOWER
• Fueling rapid community innovation

• Combined Market Leadership and Reach
• Hortonworks' strong client success, rapid growth and leadership in the Hadoop community
• Power's success, large global enterprise install base, and IBM's client focus

Scott Gnau, CTO, Hortonworks at Edge. YouTube: http://bit.ly/2dSOliW and https://youtu.be/z9X---2z2qY

IBM, Hortonworks and Mellanox Combined Value

Unequaled in the Market…


IBM
. OpenPOWER performance leadership
. Flexible, software defined storage
. #1 Data Science Platform (Source: Gartner)
. #1 SQL Engine for complex, analytical workloads
. Leader in on-premise and hybrid cloud solutions

Hortonworks
. #1 pure open source Hadoop distribution
. 1100+ customers and 2100+ ecosystem partners
. Employs the original architects, developers and operators of Hadoop from Yahoo!

Mellanox
. #1 provider of high-performance adapters (Source: Crehan Research)
. Only end-to-end InfiniBand and Ethernet provider
. Fastest growing Ethernet switch vendor
. Fair and predictable performance
. Zero packet loss
. Lower latency
. Dynamic buffer

. IBM adopted Hortonworks Data Platform (HDP) as its core Hadoop distribution and resells HDP and HDF
. IBM and Hortonworks Data Platform adopt Mellanox Ethernet and InfiniBand as the interconnect solution provider
. Hortonworks will adopt and resell IBM Data Science Experience (DSX) and IBM Big SQL

Leading Supplier of End-to-End Interconnect Solutions

[Diagram: end-to-end interconnect from server/compute nodes through switch/gateway with Virtual Protocol Interconnect to storage front/back-end, at 56/100/200G InfiniBand and 10/25/40/50/100/200GbE.]

Mellanox Leads Across Industries

Delivering Highest Return on Investment

5 of Top 6 Global Banks

9 of Top 10 Hyperscale Companies

9 of the Top 10 Oil and Gas Companies

3 of Top 5 Pharmaceutical Companies

10 of Top 10 Automotive Manufacturers

Flexibility with HDP on Power Systems

. Scale Up or Out to Meet Evolving Workloads
• Scale up each node by exploiting the memory bandwidth and multi-threading
• 4X threads per core vs. x86 allows you to optimize and drive more workload per node
• Offering 4X memory bandwidth vs. x86, POWER8 gives you more options as your workloads expand and evolve

. Unmatched Range of Linux Servers
• From 1U, 16-core servers up to 16-socket, 192-core powerhouses with industry leading reliability, all running standard Linux
• Virtualization options to host low cost dev environments or rich, multi-tenant private clouds
• Wide range of OpenPOWER servers offered by OpenPOWER members for on-prem and the cloud

. Accelerated Analytics
• Add accelerators (flash, GPU, FPGA) with direct access to processor memory with OpenCAPI

Power Systems S822LC for Big Data

Not Just Another Server: Innovation Pervasive in the Design
• NVIDIA: Tesla K80 GPU accelerator
• Linux by Red Hat: Red Hat 7.2 Linux OS
• Mellanox: InfiniBand/Ethernet connectivity in and out of the server
• HGST: optional NVMe adapters
• Alpha Data with FPGA: optional CAPI accelerator
• Samsung: SSDs & NVMe
• Hynix, Samsung, Micron: DDR4
• IBM: POWER8 CPU

Major N. American Food Retailer Implements HDP on IBM POWER

Business need:
• Gain a competitive advantage by retaining and analyzing their store-level loyalty program data
• Bring outsourced analytics back in-house

Solution:
• Consolidation of client transaction data into a Hortonworks Data Platform on Linux on IBM Power Systems
• SAP Customer Activity Repository (CAR) application, powered by SAP HANA, connected to the data lake to enable real-time insights
• HDP 2.6 running on a cluster of 9 IBM Power System servers

Business Benefits:
• More efficient and flexible in-store experiences for their clients to increase client loyalty and purchases

Time to Value:
• Full solution deployed by IBM Lab Services and an IBM Business Partner in < 2 weeks
• Trial to production in 2 months

Customer Success Story

. Business Problem
• Transformational journey resulting in rapid expansion of business models
• Technology innovation required to keep up with the business expansion while improving client satisfaction, reducing costs and supporting the company's green IT initiatives
- Existing x86 server sprawl not sustainable

. Solution with Hortonworks, IBM OpenPOWER servers and Sage Solutions Consulting
• Embraces the open software and hardware model adopted by Florida Blue
• Hortonworks supporting new fraud analytics initiative to reduce costs and client premiums
• OpenPOWER to enable smaller datacenter footprint with stronger reliability
• High performance interconnect solutions from Mellanox provide ample bandwidth, tested in end-to-end HDP solutions

. Differentiators:
• Flexibility – Richest family of Linux servers to match your workload's scale and reliability needs
• Performance and Price/Performance – Leading performance for SQL and Spark workloads
• Designed for Cognitive/AI – Obtain your ML/DL results faster with AI on Power servers
• TCO – 3X compute and storage infrastructure reduction with Power and Elastic Storage
• Open on Open – Leading innovation and choice with open Hadoop on OpenPOWER
• Support – Hortonworks and IBM industry experts with commitment to client success

Performance

IBM Power S822LC, Hortonworks and Mellanox

. Delivering Leadership in Hadoop Big Data Environments…

• POWER8 and Hortonworks deliver 1.70X the throughput compared to Hortonworks running on x86 (70% more throughput)
– 70% more QpH based on the average response time: complete the same amount of work with less system resources
– 41% reduction on average in query response time: reduced response time enables making business decisions faster
• Results are based on IBM internal testing of Power S822LC for Big Data
– Compared to x86 published results found at https://hortonworks.com/blog/apache-hive-going-memory-computing/
– Based on 10 representative queries from a standard query workload

• Performance results are based on preliminary IBM internal testing of 10 queries (simple, medium, and complex) with varying runtimes running against a 10TB database. The tests were run on 10x IBM Power System S822LC for Big Data (20 cores / 40 threads, 2x POWER8 2.92GHz, 256 GB memory, RHEL 7.2, HDP 2.5.3) compared to the published x86/Hortonworks results running on 10x AWS d2.8xlarge EC2 nodes running HDP 2.5; details can be found at https://hortonworks.com/blog/apache-hive-going-memory-computing/. Conducted under laboratory conditions; individual results can vary based on workload size, use of storage subsystems and other conditions. Data as of February 28, 2017.
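The 70% throughput gain and the 41% response-time reduction quoted above are two views of the same result, assuming queries-per-hour scales inversely with average query response time at fixed concurrency (an assumption used here only to show the arithmetic is consistent):

\[
\frac{t_{\mathrm{POWER8}}}{t_{\mathrm{x86}}} \approx \frac{1}{1.70} \approx 0.59,
\qquad
1 - 0.59 \approx 41\%\ \text{reduction in average query response time.}
\]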

Reference Architecture

HDP on POWER – Minimum Production Configuration

[Diagram: minimum production configuration. The solution environment contains one System Management Node, one Edge Node, three Master Nodes and eight Worker Nodes (each with local disk). Nodes attach to the shared campus network (VLAN-tagged traffic from the servers, with a client uplink), to the data network (with an optional client uplink), and to a private FSP service network (untagged traffic) for systems management.]

Partial Homed (Thin DMZ) network topology shown; other topologies possible and supported.

HDP on POWER – Initial Reference Configurations

| | System Mgmt Node | Master Node | Edge Node | Worker Node (Balanced) | Worker Node (Performance) | Worker Node (Storage Dense) |
|---|---|---|---|---|---|---|
| Server Type | 1U S821LC (Stratton) | 1U S821LC (Stratton) | 1U S821LC (Stratton) | 2U S822LC (Briggs) | 2U S822LC (Briggs) | 2U S822LC (Briggs) |
| Count (Min / Max) | 1 / 1 | 3 / Any | 1 / Any | 8 / Any | 8 / Any | 8 / Any |
| Cores | 8 | 20 | 20 | 22 | 22 | 11 |
| Memory | 32GB | 256GB | 256GB | 256GB | 512GB | 128GB |
| Storage - HDD | 2x 4TB HDD | 4x 4TB HDD | 4x 4TB HDD | 12x 4TB HDD | 8x 6TB HDD | 12x 8TB HDD |
| Storage - SSD | | | | | + 4x 3.8TB SSD | |
| Storage Controller | Marvell (internal) | LSI MegaRAID 9361-8i (2GB cache) | LSI MegaRAID 9361-8i (2GB cache) | LSI MegaRAID 9361-8i (2GB cache) | LSI MegaRAID 9361-8i (2GB cache) | LSI MegaRAID 9361-8i (2GB cache) |
| Network - 1GbE | 4 ports (internal) | 4 ports (internal) | 4 ports (internal) | 4 ports (internal) | 4 ports (internal) | 4 ports (internal) |
| Network - 10GbE | 2 ports | 2 ports | 2 ports | 2 ports | 2 ports | 2 ports |

Switches:
• 1 GbE (1x or 2x): IBM 7120-48E (G8052) switch (48x 1GbE + 4x 10GbE ports)
• 10 GbE (2x typical, 1x allowed): IBM 7120-64C (Lenovo G8264) switch (48x 10GbE + 4x 40GbE), or IBM 8831-S48 (Mellanox SX1410) switch (48x 10GbE + 12x 40GbE)

Additional configuration options:
• Network topologies: Flat, Dual Homed, Partial Homed, Full DMZ
• Size: POC, min-production (12 node), full rack, multi rack
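For rough capacity planning, the worker drive counts above translate directly into HDFS capacity. A minimal sizing sketch (my own illustration, assuming the HDFS default replication factor of 3 and ignoring OS, temp and filesystem overhead):

```python
# Rough HDFS capacity estimate for the worker reference configurations above.
# Assumes HDFS default 3x replication; ignores OS, temp and filesystem overhead.

WORKER_CONFIGS = {            # (drives per node, TB per drive), from the table above
    "Balanced":      (12, 4),
    "Performance":   (8, 6),
    "Storage Dense": (12, 8),
}
REPLICATION = 3               # HDFS default replication factor (assumption)

def usable_tb(config: str, nodes: int = 8) -> float:
    """Approximate usable HDFS capacity (TB) for `nodes` worker nodes."""
    drives, tb_per_drive = WORKER_CONFIGS[config]
    return drives * tb_per_drive * nodes / REPLICATION

for name in WORKER_CONFIGS:
    print(f"{name}: ~{usable_tb(name):.0f} TB usable at the 8-node minimum")
# Balanced ~128 TB, Performance ~128 TB, Storage Dense ~256 TB
```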

HDP on POWER – Reference Architecture

Single-rack example (minimum production configuration) and multi-rack example (extensible)

[Diagram: rack elevations. Each rack has cable ingress/egress at top and bottom, rack-to-rack switches near the top, a Lenovo 7120-48E 1GbE switch and one or two Mellanox SX1410 10GbE switches mid-rack, and PDUs at the sides. The single-rack example holds the System Management Node, Edge Node and three Master Nodes (8001-12C) plus Worker Nodes (8001-22C); the multi-rack example fills additional racks with Worker Nodes, with up to 18 worker nodes per rack possible.]

Mellanox Infrastructure for HortonWorks – Choice of Cabling

| Speed | Switch | Adapter | Optics* | Cabling |
|---|---|---|---|---|
| 40 GbE | SX1710 – 8831-NF2 | EKAL 2@40 | EB27 + EB2J or EB2K | See list below |
| 10/40 GbE | SX1410 – 8831-S48 | EKAU 2@10/25 or EKAL 2@40 | EB28 + ECBD or ECBE | See list below |
| 1/10 GbE | 4610-54T – 8831-S52 | LOM | | |

* Optics are IBM parts only

40GbE / FDR cabling:

| Length | Description | FC |
|---|---|---|
| 0.5m | 40GbE / FDR Copper Cable QSFP | EB40 |
| 1m | 40GbE / FDR Copper Cable QSFP | EB41 |
| 2m | 40GbE / FDR Copper Cable QSFP | EB42 |
| 3m | 40GbE / FDR Optical Cable QSFP | EB4A |
| 5m | 40GbE / FDR Optical Cable QSFP | EB4B |
| 10m | 40GbE / FDR Optical Cable QSFP | EB4C |
| 15m | 40GbE / FDR Optical Cable QSFP | EB4D |
| 20m | 40GbE / FDR Optical Cable QSFP | EB4E |
| 30m | 40GbE / FDR Optical Cable QSFP | EB4F |
| 50m | 40GbE / FDR Optical Cable QSFP | EB4G |

[Diagram: network topology options (Flat, Dual Homed, Partial Homed / "Thin DMZ", and Full DMZ), showing Internet, firewall, public and private network placement relative to the EDW and the cluster nodes.]

Mellanox Infrastructure for HortonWorks

As you increase the speed of the network, the PCI slot topology becomes important. IBM offers two card/slot topologies in these servers:
1. PCI Gen 3.0 x8
2. PCI Gen 3.0 x16
The x8/x16 designation is the width of the PCI Express bus, which determines how much network bandwidth can be passed between the adapter and the CPU through the slot.

| Speed | PCI Gen 3.0 x8 - # Ports | FC# | PCI Gen 3.0 x16 - # Ports | FC# |
|---|---|---|---|---|
| 10 GbE | 2 | - | 2 | EKAU |
| 25 GbE | 2 | - | 2 | - |
| 40 GbE | 1 | EC3A | 2 | EC3L / EKAL |
| 50 GbE | 1 | EKAM* (x16 card) | 2 | EC3L / EKAL |
| 56 GbE | 1 | EC3A | 2 | EC3L / EKAL |
| 100 GbE | 0 | - | 1 | EC3L / EKAM |
| FDR | 1 | - | 2 | EL3D / EKAL |
| EDR | 0 | - | 1 | EC3E / EKAL |

NOTE: To provide an Active/Active redundant network, the PCI slot must have enough bandwidth to pass the data between the CPU and the network. IBM FC# EC3A is only a PCI Gen 3.0 x8 card, so it is limited to a maximum bandwidth of 56Gb. To achieve a dual 40GbE Active/Active redundant network, FC# EC3L or EKAL should be used, with both ports connected at 40GbE, on a card in a PCI Gen 3.0 x16 slot.
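As a back-of-the-envelope check of the note above (my own numbers: PCIe Gen 3.0 signals at 8 GT/s per lane with 128b/130b encoding; PCIe protocol overhead is ignored, so deliverable throughput is somewhat lower, consistent with the ~56Gb figure quoted for an x8 card):

```python
# Back-of-the-envelope PCIe Gen 3.0 slot bandwidth vs. a dual-port 40GbE adapter.
# Assumes 8 GT/s per lane with 128b/130b encoding; PCIe protocol overhead ignored.

GT_PER_LANE = 8.0                 # GT/s per PCIe Gen 3.0 lane
ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line encoding

def pcie_gen3_gbps(lanes: int) -> float:
    """Raw PCIe Gen 3.0 slot bandwidth in Gb/s for a given lane count."""
    return GT_PER_LANE * ENCODING_EFFICIENCY * lanes

dual_40gbe = 2 * 40  # Gb/s needed with both ports active at 40GbE

for lanes in (8, 16):
    slot = pcie_gen3_gbps(lanes)
    verdict = "enough" if slot >= dual_40gbe else "not enough"
    print(f"x{lanes}: ~{slot:.0f} Gb/s -> {verdict} for 2x 40GbE ({dual_40gbe} Gb/s)")

# x8 : ~63 Gb/s  -> not enough, hence the EC3A single-port limit noted above
# x16: ~126 Gb/s -> enough, hence the EC3L / EKAL recommendation
```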

NOTE on bonding: The most common mode is Mode 4 (LACP/802.3ad). It carries extra overhead and was originally intended to bond low-speed, unreliable links. With modern Ethernet networks and enhancements to Linux, Mode 5 (TLB) and Mode 6 (ALB) are good choices: they have less overhead than Mode 4 and require no configuration on the switches to provide Active/Active redundancy.
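A minimal, illustrative check of which mode a host is actually running (my own sketch; it assumes a Linux host where the bond already exists and is named bond0, a hypothetical name):

```python
# Sketch: read an existing Linux bond's mode from sysfs and report whether it is
# one of the switch-independent Active/Active modes discussed above.
from pathlib import Path

BOND = "bond0"  # hypothetical interface name; adjust to the bond actually configured

def bond_mode(bond: str = BOND) -> str:
    # /sys/class/net/<bond>/bonding/mode contains e.g. "balance-alb 6"
    return Path(f"/sys/class/net/{bond}/bonding/mode").read_text().split()[0]

mode = bond_mode()
if mode in ("balance-tlb", "balance-alb"):        # modes 5 and 6
    print(f"{BOND}: {mode} - Active/Active, no switch-side configuration needed")
elif mode == "802.3ad":                           # mode 4 (LACP)
    print(f"{BOND}: LACP - requires matching LAG/LACP configuration on the switches")
elif mode == "active-backup":                     # mode 1, used for the InfiniBand case below
    print(f"{BOND}: Active/Standby")
else:
    print(f"{BOND}: {mode}")
```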

NOTE: When Mellanox is configured end to end (adapter, cable and switch), there is a free upgrade to Mellanox-supported 56GbE, which provides 40% more bandwidth than 40GbE. Activation is a single command, "speed 56000", on the required switch interface.

NOTE: To achieve a redundant network for InfiniBand:
• FDR: FC# EC3E / EKAL @ 2x FDR
• EDR: 2x FC# EC3E / EKAL @ EDR
Redundancy is provided by Mode 1 (Active/Standby); the bond is created the same way as a normal Linux bond.

Mellanox Infrastructure for HortonWorks 10 GbE Cluster – Choice of Cabling

| Speed | Switch | Adapter | Cabling |
|---|---|---|---|
| 10/40 GbE | SX1410 – 8831-S48 | EKAU 2@10/25 | See list below |
| 1 GbE | 4610-54T – 8831-S52 | LOM | |

40GbE / FDR cabling: same EB40–EB4G options (0.5m–50m copper/optical QSFP) as listed above.

Sample 96-port L2 cluster: 48 HA 10GbE hosts + ESS storage

[Diagram: sample 96-port L2 cluster. Two 8831-S48 ToR switches (48x 10GbE + 12x 40GbE each), with 48x 10GbE endpoints per leaf. The 10GbE clients and the ESS storage are dual-homed using Mode 6 (ALB) bonds.]

Mellanox Infrastructure for HortonWorks 10 GbE Cluster – Choice of Cabling

| Speed | Switch | Adapter | Optics | Cabling |
|---|---|---|---|---|
| 40 GbE | SX1710 – 8831-NF2 | | EB27 + EB2J or EB2K | See list below |
| 10/40 GbE | SX1410 – 8831-S48 | EKAU 2@10/25 | EB28 + ECBD or ECBE | See list below |
| 1 GbE | 4610-54T – 8831-S52 | LOM | | |

40GbE / FDR cabling: same EB40–EB4G options (0.5m–50m copper/optical QSFP) as listed above.

Sample 192-port L2 (VMS) cluster: 96 HA 10GbE hosts; IPL 4x 56GbE between 8831-NF2 spines (36x 40GbE)

[Diagram: 192-port L2 (VMS) spine/leaf cluster. 8831-NF2 spines (36x 40GbE) joined by a 4x 56GbE IPL; 8831-S48 leaves (48x 10GbE + 12x 40GbE), each with 6x 40GbE links per spine and 48x 10GbE endpoints per leaf for the 10GbE clients.]

Mellanox Infrastructure for HortonWorks 40GbE Cluster – Choice of Cabling

| Speed | Switch | Adapter | Optics* | Cabling |
|---|---|---|---|---|
| 40 GbE | SX1710 – 8831-NF2 | EKAL 2@40 | EB27 + EB2J or EB2K | See list below |

* Optics are IBM parts only

40GbE / FDR cabling: same EB40–EB4G options (0.5m–50m copper/optical QSFP) as listed above.

Sample 72-port L2 (VMS) cluster: 36 HA 10/40GbE ports; IPL 4x 56GbE between 8831-NF2 spines (36x 40GbE)

[Diagram: 72-port L2 (VMS) cluster. 8831-NF2 spines (36x 40GbE) joined by a 4x 56GbE IPL, with 7x 56GbE links per spine to the leaves; 18x 10/40GbE endpoints per leaf serving the 40GbE data network and 40GbE clients. 10GbE endpoints attach through a QSFP-to-SFP+ adapter (QSA) with SFP+ DAC or transceiver.]

* 10GbE cables and optics are IBM parts only

Mellanox Infrastructure for HortonWorks 40 GbE Cluster – Choice of Cabling

| Speed | Switch | Adapter | Optics | Cabling |
|---|---|---|---|---|
| 40 GbE | SX1710 – 8831-NF2 | EKAL 2@40GbE | EB27 + EB2J or EB2K | See list below |
| 1 GbE | 4610-54T – 8831-S52 | LOM | | |

40GbE / FDR cabling: same EB40–EB4G options (0.5m–50m copper/optical QSFP) as listed above.

Sample 108-port L3 (VMS) cluster: 90 HA 40GbE ports + dedicated storage switches

[Diagram: Layer 3 OSPF/ECMP network (Mellanox VMS) built from 36-port 40GbE switches: a spine tier of six switches and a leaf tier of twelve switches, with 6 ports per spine and 18 host-facing ports per leaf, plus dedicated storage switches. Compute nodes use 1x EKAL at 2x 40Gb per node with Mode 6 (ALB) bonding; dual-port cards split their ports across the leaf pair (72x bottom ports, 72x top ports). ESS storage: 4x ESS with 4x EC3L cards at 2x 40Gb, 2x EC3L per NSD; 2x 100Gb cards x 2 ports @ 40 = 160Gb per NSD, or 3x 40Gb cards x 1 port @ 40 = 120Gb per NSD; 32x 40Gb ports = ~112GB.]

Mellanox Infrastructure for ESS/Spectrum Scale

GB Bandwidth per Port per Speed for Single NSD/IO Node

[Chart: bandwidth in GB/s by link speed and port count for a single NSD/IO node; the values are tabulated below.]

Single NSD port bandwidth options (GB/s):

| Ports | 10GbE | 25GbE | 40GbE | 100GbE@2x 40GbE | 56GbE | 100GbE@2x 56GbE | FDR | EDR@2x FDR | 100GbE | EDR |
|---|---|---|---|---|---|---|---|---|---|---|
| One Port | 0.8 | 1.8 | 3.2 | 3.6 | 4.48 | 4.48 | 5.0 | 5.5 | 8.0 | 8.5 |
| Two Ports | 1.6 | 3.6 | 6.4 | 7.2 | 8.96 | 8.96 | 10.0 | 11.0 | 16.0 | 17.0 |
| Three Ports | 2.4 | 5.4 | 9.6 | 10.8 | 13.44 | 13.44 | 15.0 | 16.5 | 24.0 | 25.5 |
| Four Ports | 3.2 | 7.2 | | 14.4 | | 17.92 | 20.0 | 22.0 | | |
| Five Ports | 4.0 | 9.0 | | 18.0 | | 22.4 | 25.0 | 27.5 | | |
| Six Ports | 4.8 | 10.8 | | 21.6 | | 26.88 | 30.0 | 33.0 | | |
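Every cell in the table is simply the per-port figure multiplied by the port count. A small sketch to reproduce any entry (per-port GB/s values taken from the one-port row above):

```python
# Reproduce the single-NSD bandwidth table: aggregate GB/s = per-port GB/s x ports.
PER_PORT_GBPS = {                      # one-port values from the table above (GB/s)
    "10GbE": 0.8, "25GbE": 1.8, "40GbE": 3.2, "100GbE@2x 40GbE": 3.6,
    "56GbE": 4.48, "100GbE@2x 56GbE": 4.48, "FDR": 5.0,
    "EDR@2x FDR": 5.5, "100GbE": 8.0, "EDR": 8.5,
}

def nsd_bandwidth(speed: str, ports: int) -> float:
    """Aggregate bandwidth (GB/s) of one NSD/IO node with `ports` ports at `speed`."""
    return round(PER_PORT_GBPS[speed] * ports, 2)

print(nsd_bandwidth("FDR", 6))      # 30.0, matching the six-port FDR cell
print(nsd_bandwidth("100GbE", 2))   # 16.0
# For the dual-NSD table on the next slide, double the result (two NSD/IO nodes).
```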

Mellanox Infrastructure for ESS/Spectrum Scale

[Chart: max sequential throughput (GBytes/s) vs. TB usable capacity for selected ESS models: GL2 = 8 GB/s, GL2S = 11 GB/s, GL4 = 17 GB/s, GL4S = 23 GB/s, GL6 = 25 GB/s, GL6S = 34 GB/s. Read, IOR, InfiniBand+RDMA network, (ESS) filesystem blocksize 16MB. Capacity is approximate max using 8+2P (ESS), combined MD+Data pool; logarithmic scale.]

Dual NSD port bandwidth options (GB/s):

| Ports per NSD | 10GbE | 25GbE | 40GbE | 100GbE@2x 40GbE | 56GbE | 100GbE@2x 56GbE | FDR | EDR@2x FDR | 100GbE | EDR |
|---|---|---|---|---|---|---|---|---|---|---|
| One Port | 1.6 | 3.6 | 6.4 | 7.2 | 8.96 | 8.96 | 10.0 | 11.0 | 16.0 | 17.0 |
| Two Ports | 3.2 | 7.2 | 12.8 | 14.4 | 17.92 | 17.92 | 20.0 | 22.0 | 32.0 | 34.0 |
| Three Ports | 4.8 | 10.8 | 19.2 | 21.6 | 26.88 | 26.88 | 30.0 | 33.0 | 48.0 | 51.0 |
| Four Ports | 6.4 | 14.4 | | 28.8 | | 35.84 | 40.0 | 44.0 | | |
| Five Ports | 8.0 | 18.0 | | 36.0 | | 44.8 | 50.0 | 55.0 | | |
| Six Ports | 9.6 | 21.6 | | 43.2 | | 52.76 | 60.0 | 66.0 | | |

IBM Support Contacts

• Duane Dial – Director of Sales, IBM WW: [email protected], 512-574-4360
• Matthew Sheard – Solutions Architect, IBM WW: [email protected], 512-770-4991
• Jim Lonergan – Business Development, IBM WW: [email protected], 919-360-1654
• John Biebelhausen – Sr. OEM Marketing: [email protected], 512-897-8245
• Lyn Stockwell-White – North America Channels, IBM: [email protected], 602-999-5255

OEM Microsite

www.mellanox.com/oem/ibm

Mellanox Community

https://community.mellanox.com/community/solutions

Mellanox Academy

http://academy.mellanox.com/en/

Thank You