COMPANY NOTE

Initiating Coverage

USA | Technology | IT Hardware
EQUITY RESEARCH | AMERICAS
October 2, 2018

Mellanox (MLNX)
Rating: BUY
Price: $73.45^
Price Target: $110.00

Underappreciated Durable Growth Story, Initiate with a Buy Rating

Key Takeaway
We're initiating coverage of Mellanox with a Buy rating and a $110.00 PT. We like the risk/reward as investor sentiment is too negative and the company is set up to benefit from Data Center investments in Interconnect.

High-Performance Interconnect Player. Mellanox is a leading supplier of high-performance Interconnect solutions for Data Center networks. Their products – sold as Circuit Boards, Integrated Circuits, Cables, and Switches – are used to connect Servers, Storage, and Networking devices together in Data Centers.

Latency "Whack-a-Mole" Now Pointing to Mellanox. We see a confluence of industry and technology trends driving Data Center operators to invest more aggressively in the Interconnect portion of their networks. As our report shows, latency is a critical variable driving the user experience as well as the ROI on Data Center assets. Driven by major trends such as Public Cloud, Artificial Intelligence, and Big Data, Data Center operators shift their capital investments to alleviate network latency (hence, the whack-a-mole concept). From a technology perspective, we see significant changes in Compute, Storage, and Networking technology that are shifting network bottlenecks onto Interconnect infrastructure – great for Mellanox.

Durable Growth Story. We expect these industry trends to drive a multi-year growth story at Mellanox. Not only should the business grow with rising investment in Data Center assets, they're exposed to the faster-growing parts of the technology evolution (25G+ NICs, Smart NICs, and White Box Switching).

Activist Involvement Helps. We like the activist involvement in the stock as it certainly increases management's focus on shareholder value. We'd like to see the company get more shareholder-friendly in general. The pending addition of a CFO and any improvement in the Investor Relations effort could be helpful in this regard.

Above-Consensus Estimates. We're modeling for 2019 and 2020 sales of $1.263 billion (+17% Y/Y) and $1.439 billion (+14% Y/Y), respectively. Street estimates call for $1.197 billion in 2019 sales (there is no consensus 2020 figure). For EPS, we're modeling for $6.40 and $7.50 in 2019 and 2020 non-GAAP EPS, respectively (Street = $5.65 and $6.82, respectively).

Financial Summary
Book Value/Share: $21.83
Net Debt (MM): ($290.6)

Market Data
52 Week Range: $90.45 - $42.25
Total Entprs. Value: $3.6B
Market Cap.: $3.9B
Insider Ownership: 18.6%
Institutional Ownership: 59.2%
Shares Out. (MM): 53.2
Float (MM): 48.2
Avg. Daily Vol.: 436,253

George C. Notter *, Equity Analyst, (415) 229-1522, [email protected]
Kyle McNealy *, Equity Associate, (415) 229-1528, [email protected]
Steven Sarver *, Equity Associate, (415) 229-1520, [email protected]
* Jefferies LLC / Jefferies Research Services, LLC

USD            2017A    2018E     2019E     2020E
Rev. (MM)      863.9    1,078.4   1,263.4   1,439.0
EV/Rev         4.2x     3.4x      2.9x      2.5x
Cons. EPS      2.28     4.64      5.65      6.82
EPS - Mar      0.29     0.98A     1.28      1.84
EPS - Jun      0.44     1.25A     1.49      1.85
EPS - Sep      0.71     1.23      1.69      1.88
EPS - Dec      0.82     1.27      1.93      1.93
EPS - FY Dec   2.27     4.73      6.40      7.50
FY P/E         32.4x    15.5x     11.5x     9.8x

^Prior trading day's closing price unless otherwise noted.

Price Performance
[Line chart of MLNX share price, OCT-17 through OCT-18, ranging roughly between $40 and $90]

Please see analyst certifications, important disclosure information, and information regarding the status of non-US analysts on pages 50 to 54 of this report.

Mellanox (MLNX): Underappreciated Durable Growth Story, Initiate with a Buy Rating
Buy: $110.00 Price Target

THE LONG VIEW

Investment Thesis / Where We Differ
- We believe investors underestimate the strength and durability of the multi-year growth cycle for server interconnect (NICs) going on right now – driven by Public Cloud and Enterprise Data Center customers.
- We see investment in NICs/Server Uplinks as the most effective way for Cloud & Data Center customers to unlock additional performance in their infrastructure, driving significant demand for Mellanox products.
- Mellanox has a first mover advantage and strong product lead for 25G/50G/100G Ethernet NICs. Further, AI, Machine Learning, Big Data, and faster Storage are expanding the markets and use cases for these products.

Catalysts
- Ramping 25/50/100G Ethernet demand (positive)
- Hyperscale Ethernet Switch wins (positive)
- Launch of Omni-Path 2 in 2019 (negative)
- Any competitive product announcements from Broadcom or Marvell/Cavium for Ethernet NICs (negative)

Scenarios

Base Case
- MLNX maintains high share of 25/50/100G NIC Ethernet market; market grows at a 45% 5-yr CAGR for 2017-2022
- HPC segment grows low single-digits in CY'18-19, driven by 200G product cycle partially offset by Omni-Path competition
- Scale benefits offset some gross margin pressure as the product mix shifts to Ethernet
- Ethernet Switching products see modest attach rates with end-to-end Ethernet deals for Mellanox
- CY'20 EPS: $7.50; Target P/E Multiple: 14.7x; Price Target: $110

Upside Scenario
- MLNX gains even slightly higher share of 25/50/100G NIC Ethernet market; market grows at a 50%+ 5-yr CAGR for 2017-2022
- CY'18-19 growth faster than expected, driven by HPC wins, Public Cloud and Web 2.0 adoption of Ethernet, Storage and 200G, and Ethernet Switching
- Intel fails to gain additional traction in High Performance Interconnect with Omni-Path
- Ethernet switching products see increasing attach rates with end-to-end Ethernet deals for Mellanox
- CY'20 EPS: $8.25; Target Multiple: 18.2x; Price Target: $150

Downside Scenario
- MLNX loses share of 25/50/100G NIC Ethernet market as competitive products come to market; market grows at a 25% 5-yr CAGR for 2017-2022
- Intel Omni-Path and Omni-Path 2 integrated interconnect drive more share loss and pricing pressure than anticipated
- HPC market slows significantly
- Ethernet Switching opportunities fail to materialize
- CY'20 EPS: $6.00; Target Multiple: 10.8x; Price Target: $65

Long Term Analysis
Long Term Financial Model Drivers:
- LT Earnings CAGR: 25%
- Organic Revenue Growth: 20%+
- Acquisition Contribution: Neutral
- Operating Margin Expansion: Significant


Underappreciated Durable Growth Story, Initiate with a Buy Rating

Initiating Research Coverage with a Buy Rating and a $110 Price Target. Mellanox, based in Yokneam, Israel, and Sunnyvale, CA, is a leading supplier of high-performance Interconnect solutions for Data Center networks. Their products – sold as Circuit Boards, Integrated Circuits, Cables, and Switches – are used to connect Servers, Storage, and Networking devices together in Data Centers. The company is the market-share leader in the InfiniBand space with ~85% market share. With a new product initiative that began in 2013, Mellanox has emerged as a leader in Ethernet Network Interface Cards (NIC) products as well. It is now the number 1 provider of Ethernet NICs at 25G speeds and higher. Looking forward, we believe that a confluence of factors will drive improved performance from the business. We're initiating research coverage of the shares with a $110 Price Target and a Buy rating. Key elements of our thesis include:

Latency "Whack-a-Mole" Now Pointing to Mellanox. We see a confluence of industry and technology trends driving Data Center operators to invest more aggressively in the Interconnect portion of their networks. As our report shows, latency is a critical variable driving the user experience as well as the ROI on Data Center assets. Driven by major trends such as Public Cloud, Artificial Intelligence, and Big Data, Data Center operators shift their capital investments to alleviate network latency (hence, the whack-a-mole concept). From a technology "supply-side" perspective, we see significant changes in Compute, Storage, and Networking technology that are shifting network bottlenecks onto Interconnect infrastructure – great for Mellanox. In Compute, these trends include the dramatic growth in GPUs (i.e. Nvidia) as well as the pending shift toward higher-speed Server bus standards (PCIe 4.0). As greater amounts of data flow between the CPU and memory, interconnect (NICs) have to get faster so more data can flow into the Server itself. In Storage, the growing adoption of Solid State Drive / Flash storage – as opposed to spinning media on Hard Disk Drives (HDD) – is helpful. By solving for the latency issues inherent in Storage, SSD helps to move the traffic bottleneck onto other areas of the Data Center. In Networking, higher capacity Switching fabrics and 400G optics also create a corresponding need for faster Interconnect. We believe this confluence of industry and technology trends will drive the transition from 10G to 25G+ Server Interconnects (NICs) – especially for solutions that support CPU offload capabilities. As the market leader in InfiniBand and 25G+ Ethernet NICs, Mellanox is very well-positioned to benefit.

Durable Growth Story. We expect these industry trends to drive a multi-year growth story at Mellanox. Not only should the business grow with rising investment in Data Center assets, they’re exposed to the faster-growing parts of the technology evolution (25G+ Ethernet NICs, Smart NICs, and potentially White Box Switching).

We Don't See Intel's Omni-Path as a Significant Competitive Risk. Intel's Omni-Path does run the InfiniBand protocol stack which is "skinnier" than the Ethernet stack and provides low latency. That said, they aren't removing processing from Host Server CPUs. Naturally, Intel doesn't want to offload processing from Host CPUs – that would diminish the number of CPUs they can sell into Data Centers. We understand that Intel will sometimes try to compete on price with Mellanox's InfiniBand. That's still a difficult proposition. Any customer that's looking holistically at the total network costs will realize that Mellanox's InfiniBand is a much better solution – all else equal, as it allows them to buy fewer Server/CPUs to process an equivalent workload. On the Ethernet side of the NIC business, Mellanox holds the dominant market share for 25G and above NICs that provide CPU offload capability. Naturally, Intel doesn't compete in this business at all (it does make traditional connectivity-only NICs which, of course, don't offload processing from CPUs).


Activist Involvement Helps. We like the activist involvement in the stock as we think it's a good motivator – it certainly increases management's focus on shareholder value. We'd like to see the company get more shareholder-friendly in general. In our view, there's a wide gap between business fundamentals and the valuations that investors are currently willing to pay for the stock. The pending addition of a CFO, any improvement in the Investor Relations effort, and a better commitment to educate investors could be helpful in this regard.

Above-Consensus Estimates. We're modeling for the business to generate 2019 and 2020 sales of $1.263 billion (+17% Y/Y) and $1.439 billion (+14% Y/Y), respectively. Street estimates call for $1.197 billion in 2019 sales (there is no consensus figure for 2020). On the bottom line, we're modeling for $6.40 and $7.50 in 2019 and 2020 non-GAAP EPS, respectively (Street = $5.65 and $6.82, respectively). Given the industry drivers we've outlined above, we're more bullish on the growth rate (and the durability of growth) in their Ethernet NIC business.

Attractive Valuation. The business currently trades for 9.8x our 2020 non-GAAP EPS projection (10.8x Street consensus). Our still-conservative $110 Price Target works out to 14.7x our 2020 EPS estimate. In our view, parity to the company's historical forward P/E multiple (14.7x) – at a minimum – makes sense given the attractive revenue and EPS growth rates inherent in the business. Moreover, we expect that investors' willingness to pay higher multiples will improve as the business becomes more predictable, concerns about Intel Omni-Path competition fade, and the company drives significant cash flow.

Risks

Risks include: 1) volatility or delays in Intel's Server CPU product cycles; 2) headline risk associated with potential competitive product announcements; 3) Open Compute Project efforts in Ethernet NICs; and 4) potential volatility in capital spending among Internet Content Provider customers.

Valuation

Mellanox currently trades at 9.8x our 2020 EPS estimate, below its 5-year forward-year average P/E of 14.7x. This is also a significant discount to the comp group peer average of 14.9x (see Chart 1 below). We believe the shares should trade – at a minimum – at parity to the comp group, as Mellanox's superior growth prospects are only partially offset by higher-than-peer-group risk. We expect the company to grow sales (and EPS) at a 20-25% annual rate over the intermediate term. Our $110 Price Target works out to 14.7x our 2020 EPS estimate.
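To make the price target arithmetic explicit, here is a minimal sketch (in Python) that reproduces the multiples cited above; the inputs are the estimates from this report and the variable names are ours:

    # Price target and implied upside from a target P/E multiple
    eps_2020 = 7.50            # our 2020 non-GAAP EPS estimate
    street_eps_2020 = 6.82     # Street 2020 consensus EPS
    price = 73.45              # prior trading day's closing price
    target_pe = 14.7           # target multiple (parity with the historical forward P/E)

    price_target = target_pe * eps_2020
    print(round(price_target, 2))               # 110.25 -> ~$110 Price Target
    print(round(price / eps_2020, 1))           # 9.8x current P/E on our estimate
    print(round(price / street_eps_2020, 1))    # 10.8x on Street consensus
    print(round(price_target / price - 1, 3))   # ~0.50 -> roughly 50% implied upside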


Chart 1: Comparable Company Valuation

Company | Ticker | Rating (B/H/U) | Price 10/1/18 | Market Cap | Shares Outs. | Revenue 2018E / 2019E | Op Margin 2018E / 2019E | EPS 2018E / 2019E | Revenue Growth | EPS Growth | B/S Debt | Net Cash Per Share | P/E 2019E (excl. stock comp) | EV/Rev 2019E
Marvell Tech Group | MRVL | B | $19.27 | $10,833 | 562.1 | $2,975 / $3,570 | 27.9% / 31.4% | $1.27 / $1.53 | 13.0% | 13.0% | $1,955 | ($2.55) | 12.6x | 3.4x
Broadcom | AVGO | B | $249.51 | $110,034 | 441.0 | $20,821 / $21,469 | 49.5% / 50.0% | $20.55 / $21.76 | 13.4% | 13.4% | $17,604 | ($30.54) | 11.5x | 5.8x
Intel Corp | INTC | U | $46.45 | $220,498 | 4,747.0 | $69,541 / $71,497 | 32.2% / 31.3% | $4.14 / $4.23 | 9.6% | 9.6% | $28,796 | ($0.78) | 11.0x | 3.3x
Silicom | SILC | NC | $40.62 | $310 | 7.6 | $121 / $150 | 14.1% / 16.6% | $2.00 / $2.70 | 20.0% | 30.0% | $0 | $6.89 | 12.5x | 1.8x
Arista Networks | ANET | H | $259.64 | $20,986 | 80.8 | $2,123 / $2,598 | 36.0% / 34.3% | $7.38 / $8.11 | 25.0% | 30.0% | $0 | $23.02 | 29.2x | 7.4x
Juniper Networks | JNPR | H | $29.92 | $10,472 | 350.0 | $4,705 / $4,803 | 17.2% / 18.4% | $1.72 / $1.97 | 5.0% | 7.5% | $2,138 | $3.98 | 13.2x | 1.9x
Cisco Systems | CSCO | B | $48.87 | $236,726 | 4,844.0 | $50,412 / $51,690 | 31.6% / 31.2% | $2.69 / $2.96 | 3.0% | 5.0% | $28,072 | $5.44 | 14.7x | 4.1x
Communications Equipment Industry Average | | | | | | | 30% / 30% | | 13% | 16% | | | 14.9x | 4.0x
Mellanox (MLNX) | MLNX | B | $73.45 | $3,909 | 53.2 | $1,078 / $1,263 | 23.8% / 28.6% | $4.73 / $6.40 | 15.0% | 20.0% | $0 | $5.46 | 10.6x | 2.9x


* All estimates are based on calendar years.
** All EPS estimates and operating margins include stock compensation expense.
B/H/U represents Buy, Hold, and Underperform ratings.
Note: All valuation metrics are calculated ex-cash and associated interest income except NC firms.

Source: Jefferies Research, Factset

Chart 2: Mellanox Forward PE Ratio (Oct 2013 – Oct 2018)
[Line chart of Mellanox's forward P/E by fiscal year (P/E FY'14 through P/E FY'19) against the 5-year average. Current forward P/E: 11.9x; 5-Yr Avg: 14.4x]

Source: Factset, Jefferies Research


Chart 3: Mellanox 3-Year Price History

Source: Factset

Industry Background: An Investment Shift to Interconnect

Mellanox is a provider of end-to-end "Interconnect" solutions. Simply put, Interconnect products allow the transfer of data between various network, compute, and storage hardware elements such as servers, switches (an Interconnect product itself), and storage arrays. Interconnection between network and compute resources uses communication protocols such as Ethernet, InfiniBand (where Mellanox is dominant), Fibre Channel, and the emerging Fibre Channel over Ethernet (FCoE) protocol.

Before we dive into the Industry background, it makes sense to visualize the company's products and where they get deployed in Data Center environments. As Chart 4 shows, the organization's products get deployed as "boards" or Network Interface Cards (also called NICs or Adapters). These products, called the ConnectX product family, provide connectivity into Servers and Storage equipment and account for roughly 50% of the revenue stream. Separately, its Spectrum Switch and SwitchIB are Top-of-Rack Switches. Switch Systems account for roughly 20% of company sales. The remainder includes the LinkX family of cables and transceivers (roughly 20% of revenue).


Chart 4: Mellanox Products in Hyperscale Data Centers
[Diagram of a Hyperscale Data Center (more Ethernet, some InfiniBand) showing the Spine, Top-of-Rack, and Servers & Storage tiers. Mellanox products pictured: Ethernet ToR Switches, InfiniBand Switches, Ethernet/InfiniBand Adapter Cards, and Ethernet/InfiniBand Cables & Transceivers.]

Source: Jefferies Research, Mellanox Company Data

As an Interconnect specialist, Mellanox is exposed to a number of interesting and powerful IT trends. Historically, the company’s business has been driven by the high- performance computing (HPC) market. As such, Mellanox built its franchise on InfiniBand technology – i.e. chips, boards and Network Interface Cards (NICs) to provide high performance Interconnect from a processor or server to another processor or server. The organization had a significant advantage in that market as its solutions allowed much lower latency with InfiniBand.

Going forward, newer trends driving the business will include Public Cloud, Big Data, and Artificial Intelligence. Further, crucial technology enablers include the development of dramatically faster and more efficient Storage, Networking, and Compute infrastructure. Once again, the ability to construct networks with much lower latency provides a key advantage for Mellanox. Chart 5 illustrates the major drivers of the company's business.


Our Industry Background discussion highlights each of these major technology shifts and industry trends below.

Chart 5: Mellanox is at the Intersection of Major Industry Trends & Technology Developments

[Diagram: business drivers – High Performance Compute (HPC, with 15+ years of InfiniBand development), Cloud / Web 2.0, and Big Data – create massive-scale, clustered networks with an increasing need for low latency, driving growth in Mellanox Interconnect products and deployment of 25/50/100G NICs. Enabling technology developments in the Data Center: Compute (GPUs from Nvidia driving 10-100 fold improvements in processing speed, PCIe 4.0, parallel processing), Networking (100/200/400G optics, 25G/lane SERDES, networking/optics speeds growing 10-100x), and Storage (NVMe, RDMA; storage access getting 10-100x faster).]

Source: Jefferies Equity Research

It's All About Latency

Latency matters. In a mundane example, we know that users' online experiences are directly related to latency. Facebook once noted that, at two seconds of latency, there is a 40% session abandonment rate from its users. Of course, users simply have other (i.e. more responsive) ways to spend time on their smartphones. Given the size and scale of Facebook's business, latency becomes a massive consideration in the construction of their Data Center networks. Latency translates into fewer eyeballs and lost revenue – lots of it for an infrastructure like Facebook's that serves 2.2 billion users.

Beyond these types of user experience examples, the emergence of Artificial Intelligence and Big Data also drives the need for lower latency. GPUs with massive parallel processing power are driving the ability to process and analyze larger and larger data sets. With data sets growing at a tremendous pace, system responsiveness can become impaired. Query times stretch out and rendering times lag. For example, a fraud detection application in the financial sector requires low latency. Mellanox cites an example where a financial institution might be analyzing massive amounts of information to perform fraud detection in real time. Their systems have 300 milliseconds (i.e. 300ms, or 0.3 seconds) to process that information. The amount of data they can process in that time can make the difference between a 95% and 99% efficacy score – that difference can mean hundreds of millions of dollars for the financial institution.


Similarly, the ability to move data faster (at lower latency) can yield significant improvements in efficiency. If workloads can run faster, the operator can process more workloads and drive up the ROI on its Data Center assets. Hence, speed and low latency are directly related to Data Center economics.
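To put rough numbers on that point, the sketch below (Python) uses purely hypothetical run times and a fixed daily infrastructure cost; all figures are ours, for illustration only, not estimates from this report:

    # Illustrative only: faster workloads -> more jobs per day -> lower cost per job
    daily_infrastructure_cost = 10_000.0     # hypothetical fixed cost of a cluster per day ($)
    seconds_per_day = 24 * 3600

    for job_seconds in (10.0, 5.0):          # hypothetical workload run times
        jobs_per_day = seconds_per_day / job_seconds
        cost_per_job = daily_infrastructure_cost / jobs_per_day
        print(job_seconds, int(jobs_per_day), round(cost_per_job, 3))
    # Halving the run time (e.g., by taking network latency out of the critical path)
    # doubles daily throughput and halves the cost per workload on the same assets.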

Below, we expand upon our industry framework (Chart 5) with a discussion of the key big-picture drivers pushing investments in lower-latency Data Center infrastructure (including Mellanox). These include: High Performance Computing, Public Cloud, and Big Data. While these trends are self-evident, we've outlined the more salient aspects as they relate to Mellanox's long term opportunity.

High Performance Computing

More Performance & It's Not Just for Rocket Scientists. Much of Mellanox's traditional business is rooted in the High-Performance Computing (HPC) space. A "supercomputer" is loosely defined as a computer or set of computers working together with the ability to complete a large set of computations in a short period of time. They're increasingly comprised of systems of commodity servers working together in computing "clusters". As shown in Chart 7, the computational power of Supercomputers has increased at a rapid clip over the last two decades. Performance is measured in "FLOPS", or FLoating Point OPerations per Second. Floating point operations are complex mathematical operations that can manipulate a wide range of numerical values from the very large to the very small and with great precision (using decimal points). The world's fastest supercomputer is the Summit system at Oak Ridge National Laboratory. It boasts performance of over 120 PetaFLOPS. Over the past 10 years, supercomputer maximum calculation speed has nearly doubled every year, mostly due to an increasing number of processor cores and, to a lesser degree, an increase in FLOPS/core. Our chart is based on the "Top 500" Supercomputer list published twice a year.

Chart 6: Supercomputer Speed Terminology: FLOPs
Term        Number of FLOPs   Description
ZettaFLOPs  10^21             sextillions
ExaFLOPs    10^18             quintillions
PetaFLOPs   10^15             quadrillions
TeraFLOPs   10^12             trillions
GigaFLOPs   10^9              billions
MegaFLOPs   10^6              millions
KiloFLOPs   10^3              thousands
Source: Jefferies Research

Chart 7: Top 500 Supercomputers: Progression of Processing Power in Floating Point Operations per Second ("Flop/s")

[Line chart, 1993 – 2018: the aggregate ("Sum") performance of the Top 500 list has grown from roughly 1.1 Tflop/s to 1.2 Eflop/s, the #1 system ("N = 1") from 59.7 Gflop/s to 122.3 Pflop/s, and the #500 system ("N = 500") from 0.4 Gflop/s to 715.6 Tflop/s]

Source: Top500.org


High Performance Computing equipment has been critically important in the most computationally intensive industries such as Physics, Geology, Molecular Engineering, Oil & Gas Exploration, Movie/Special Effect Rendering, Cryptography, Weather Simulation, Bioscience, and Computer-Aided Engineering. As shown in Chart 8, more “traditional” enterprises are now increasingly looking to HPC to generate insights that they can apply to their business.

Chart 8: HPC Spending by Industry Vertical
University/Academic: 18%; Government Lab: 18%; CAE: 11%; Bio-Sciences: 10%; Defense: 10%; Geosciences: 7%; EDA / IT / ISV: 7%; DCC & Distribution: 6%; Economics/Financial: 5%; Weather: 4%; Chemical Engineering: 2%; Mechanical Design: 1%; Other: 1%
Government, Academic, and Defense comprise nearly 50% of the HPC Market.
Source: IDC (August 2016)

As shown in Chart 9 below, the HPC market is expected to grow at a reasonably good clip over the next several years (a 13.4% CAGR for Multinode servers). We note that memory supply constraints and cost increases impacted server spending in 2017, deferring some purchases and thereby contributing to some of the growth expected in 2018 and beyond.


Chart 9: Multinode Servers End User Spending ($US Billions, Constant Currency)
[Bar chart, 2015 – 2022E: spending grows from $6.8 billion in 2015 to roughly $17.6 billion by 2022E; '17-'22 CAGR for Multinode Servers: 13.4%]

Source: Gartner Worldwide Servers Forecast, 2015-2022E, Q2'18 Update*
*All statements in this report attributable to Gartner represent Jefferies interpretation of data, research opinion or viewpoints published as part of a syndicated subscription service by Gartner, Inc., and have not been reviewed by Gartner. Each Gartner publication speaks as of its original publication date (and not as of the date of this report). The opinions expressed in Gartner publications are not representations of fact and are subject to change without notice.

Cloud/Web 2.0

More Workloads = More Interconnect. As shown in Chart 10, Cisco forecasts enterprise workloads will increasingly migrate to Cloud environments. Further, Cisco is forecasting cloud-based workloads to grow at a 22% CAGR over 2016-2021 vs. -5% for traditional Data Center workloads. We believe this transition to the Cloud favors Mellanox. The company's Interconnect solutions are well-suited to environments where servers and storage are deployed with very high density.

Chart 10: Percent of Workloads in Cloud Data Centers
2012: 39%; 2013: 53%; 2014: 64%; 2015: 75%; 2016: 83%; 2017: 86%; 2018E: 89%; 2019E: 91%; 2020E: 93%; 2021E: 94%

Source: Cisco Global Cloud Index (Feb 2018)


Of course, the Internet Content Providers have businesses that are growing 50%+/year – a very favorable backdrop for continued investment in their Data Center infrastructures. As Chart 11 illustrates, their capital investments are growing at an 80% annual rate currently. These customers are heavily represented in Mellanox’s customer set.

Chart 11: North American Web 2.0 and Cloud Capex Spending (2014 – 2018E)
Internet Content Provider Spending Analysis ($US Millions), 2014 / 2015 / 2016 / 2017 / 2018E:
Apple:          $9,303 / $11,642 / $12,429 / $11,927 / $16,000
Alphabet:       $10,959 / $9,927 / $10,199 / $13,184 / $15,394 (Google segment plus Other Bets and reconciling items)
Amazon:         $4,892 / $4,588 / $6,736 / $10,057 / $12,491
Facebook:       $1,831 / $2,523 / $4,489 / $6,732 / $15,000
Microsoft:      $5,294 / $6,552 / $9,114 / $8,696 / $10,380
Total:          $32,279 / $35,232 / $42,967 / $50,596 / $69,265 (Y/Y: 36% / 9% / 22% / 18% / 37%)
Total Ex-Apple: $22,976 / $23,590 / $30,538 / $38,669 / $53,265 (Y/Y: 40% / 3% / 29% / 27% / 38%)
Quarterly, total capex growth accelerated through 2017 (10-28% Y/Y) to 80% Y/Y in Q1'18 and 79% in Q2'18 (107% and 78% ex-Apple).

Web 2.0 and Cloud capex growth is currently running at ~80% in the US

Source: Company Data, Jefferies Research

Big Data

Big Data for Everyone. Expanding upon our industry framework in Chart 5, we believe the Big Data trend will create incremental opportunity for Mellanox – both on the Compute and the Storage side. This trend is driving the need (and ability) for Enterprises to process information more quickly while accessing large data sets – that drives demand for high performance compute and storage resources. We define the Big Data category to include companies and technologies that are involved with new database, storage and software frameworks that help enable related capabilities such as predictive analytics, "eventing", and pattern detection. These tools are broadly aimed at facilitating proactive and data-driven business decision-making. Big Data analytics goes beyond traditional Business Intelligence (BI) in its ability to rapidly process large volumes and new forms of unstructured data. Further, Big Data allows predictive and real time—rather than after the fact—understanding and use of the data. A key driver for Mellanox, Big Data will gain commercial adoption because of the lower cost frameworks now available to process, store and analyze these data sets.

Big Data = Broader and More Complex Data Sets. The Big Data evolution involves capturing and processing large scale sets of both internally- and externally-produced structured and unstructured forms of data. Only about 5% of data produced today is "structured" data (i.e. data that's easily placed into a database because it's sorted into defined fields and the relationships between those fields are defined). The 95% of data that is "unstructured" or "semi-structured" needs to be stored and processed in new ways that are not cost prohibitive. Moreover, traditional relational databases were not designed to process this type of unstructured data. Organizations will continue to press for solutions that, according to the CIO of Wal-Mart, "flow data better, manage data better, analyze data better."


New software frameworks are emerging that make the collection and analysis of data possible at substantially lower costs, which ultimately will lead to broad enterprise adoption.

Big Data Analytics Are Getting Cheaper. Apache Hadoop and Open Source database management system software has brought down the costs of deploying large infrastructures to support data analysis. A 2018 Gartner cost study showed the 3-year total cost of EnterpriseDB/Postgres Enterprise was $126K versus Oracle Database Enterprise Edition at $1.066 million (Source: Gartner: State of the Open-Source DBMS Market, 2018). While there are challenges with deploying Open Source software, the lower cost makes Big Data analytics more attainable to organizations that previously couldn't afford it.

Apache Hadoop has brought similar cost savings for analytics storage infrastructure, particularly when deployed as a Cloud service. A Microsoft-sponsored study by IDC showed the total 5-year cost per terabyte for an on-premise Hadoop implementation was $212K. For a similar service with Hadoop in the cloud using the Microsoft Azure HDInsight service, the 5-year total cost per terabyte was $78K. These costs include the infrastructure hardware as well as IT staff time for support, management, and deployment. We think the proliferation, commercialization, and further maturity of Big Data analytics infrastructure and the associated cost reductions will significantly broaden the enterprise customer base deploying these solutions.

Chart 12: Hadoop in the Cloud Offers Substantial Cost Advantage ($, total cost per TB)
Hadoop On Premise: $212,100
Azure HDInsight Cloud Service: $78,100
Source: Jefferies, company data

Data Growth Is Exploding. In April 2017, IDC raised their prediction for the amount of data that would be on the planet by 2025 to 160 Zettabytes; this reflects a massive increase in stored data (5x from 2018 to 2025). Chart 14 illustrates.

Chart 13: How big is a Zettabyte?
1024 Megabytes = 1 Gigabyte (GB)
1024 Gigabytes = 1 Terabyte (TB)
1024 Terabytes = 1 Petabyte (PB)
1024 Petabytes = 1 Exabyte (EB)
1024 Exabytes = 1 Zettabyte (ZB)
Source: Jefferies

Chart 14: Global Data Stored Expected to Increase 5x Between 2018 and 2025

Source: IDC’s Data Age 2025 study sponsored by Seagate, April 2017
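As a quick arithmetic cross-check of the cost comparisons above, the sketch below (Python, ours) re-derives the savings implied by the Gartner and IDC figures quoted in this section:

    # Cost comparisons cited above (Gartner open-source DBMS study; IDC Hadoop TCO study)
    oracle_3yr = 1_066_000           # Oracle Database Enterprise Edition, 3-yr total cost ($)
    postgres_3yr = 126_000           # EnterpriseDB/Postgres Enterprise, 3-yr total cost ($)
    hadoop_on_prem_per_tb = 212_100  # on-premise Hadoop, 5-yr total cost per TB ($)
    hdinsight_per_tb = 78_100        # Azure HDInsight cloud service, 5-yr total cost per TB ($)

    print(round(1 - postgres_3yr / oracle_3yr, 3))                 # ~0.882 -> ~88% lower DBMS cost
    print(round(1 - hdinsight_per_tb / hadoop_on_prem_per_tb, 3))  # ~0.632 -> ~63% lower cost per TB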


Key Technology Trends Are Driving the Interconnect Market…

Per our framework in Chart 5, we've outlined a number of key technology trends that are driving Mellanox's Interconnect business. It's a unique set of technology breakthroughs in the areas of Storage, Compute, and Networking. Together, they're driving new demand for Interconnect infrastructure. They include the evolution of higher speed, lower latency, and cheaper Storage infrastructure. Similarly, Networking infrastructure is evolving to increased speeds while the development of GPUs drives capacity and efficiency on the Compute side. Below, we expand on each of these major themes in Data Center environments.

Storage: Solid State/Flash = Faster Access + It's Cheaper!

Storage is an increasingly important application for Mellanox; the company has said in the past that Storage applications are the second-largest driver of total company revenue behind HPC. More broadly, the technology advancements in Storage are driving the need for faster, lower latency Interconnect solutions.

Growth in Solid State/Flash. The growing adoption of Solid State Drive / Flash storage – as opposed to spinning media on Hard Disk Drives (HDD) – is a positive for Mellanox. Specifically, SSDs offer dramatically lower latency characteristics. By solving for the latency issues inherent in Storage, SSDs help to move the “latency bottleneck” off of Storage and onto other areas of Data Center infrastructure – including Interconnect solutions where Mellanox plays. We discuss the concept of a moving latency bottleneck a bit later in this report. On a related note, we expect InfiniBand/RDMA to have a disproportionate share of interconnect for SSD applications. That’s because the latency benefits of InfiniBand/RDMA become more apparent/impactful with SSDs. Total system latency is reduced to a point where RDMA can “move the needle.” This is illustrated in Chart 15 which compares the latency characteristics of HDD, SSD, and SSD with RDMA/NVMe (InfiniBand and RDMA are covered in more detail later in this report).

Chart 15: The Latency Benefits of RDMA are More Apparent in SSD Environments (µs = microsecond = 1/1,000,000 of a second)

With HDDs:        Network (100 µs) + Software (200 µs) + Disk (5,000 µs) = Total: 5,300 µs
With SSDs:        Network (100 µs) + Software (200 µs) + Disk (25 µs)    = Total: 325 µs
With RDMA / NVMe: Network (1 µs)   + Software (20 µs)  + Disk (25 µs)    = Total: 46 µs

Going from traditional spinning media (HDD) to SSD can reduce total system latency by an order of magnitude. This makes the potential benefits of RDMA / NVMe more meaningful as the latency reduction becomes much more noticeable relative to total system latency.

Source: Jefferies Research, Mellanox Company Data
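The latency budgets in Chart 15 are simple sums of the component latencies; the short sketch below (Python, ours) reproduces the totals from the chart:

    # Storage access latency budgets from Chart 15, in microseconds
    configs = {
        "With HDDs":        {"network": 100, "software": 200, "disk": 5000},
        "With SSDs":        {"network": 100, "software": 200, "disk": 25},
        "With RDMA / NVMe": {"network": 1,   "software": 20,  "disk": 25},
    }
    for name, parts in configs.items():
        print(name, sum(parts.values()))   # 5300, 325, and 46 microseconds
    # Once SSDs remove the disk penalty, the network and software stack become
    # the dominant share of total latency - which is where RDMA helps.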

Gartner forecasts that SSD storage will grow very robustly in the coming years (Chart 16). The total storage market is expected to grow at an 11% revenue CAGR for 2017-2022 (HDD and SSD) with a disproportionate share of the growth coming from SSD (13.2% CAGR). On an Exabyte basis, the market is expected to grow at 35% (HDDs and SSDs) with SSDs growing at a 52.3% CAGR.


Chart 16: Enterprise Grade Storage – Revenue by Drive Type ($, in billions)
[Bar chart, 2015 – 2022E. '17-'22 revenue CAGR: Enterprise HDD 8.2%, Enterprise SSD 13.2%, Total 11.2%]
Source: Gartner Forecast: Hard-Disk Drives, Worldwide, 2015-2022 (July 2018)

Chart 17: Enterprise Grade Storage – Exabytes Shipped by Drive Type (Exabytes)
[Bar chart, 2015 – 2022E. '17-'22 Exabyte CAGR: Enterprise HDD 32.3%, Enterprise SSD 52.3%, Total 35.0%]
Source: Gartner Forecast: Hard-Disk Drives, Worldwide, 2015-2022 (July 2018)

At the system level, shipments of enterprise-grade SSDs used as storage network accelerators (and as storage for “mission critical” applications) are expected to grow from 25.5 million units in 2017 to 47.6 million in 2022, a 13% CAGR. Similarly, Gartner projects $11.0 billion in Enterprise All-SSD Storage Arrays sales by 2022 – quite meaningful versus $17 billion in total external enterprise storage array sales they expect in 2022. (Source: Gartner Forecast: External Storage Systems, Worldwide, All Countries, 2015- 2022, July 2018)
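The growth rates quoted in this section are standard compound annual growth rates; a minimal helper (Python, ours), applied to the enterprise SSD unit forecast above:

    # CAGR helper applied to the enterprise-grade SSD unit forecast cited above
    def cagr(begin, end, years):
        return (end / begin) ** (1.0 / years) - 1.0

    print(round(cagr(25.5, 47.6, 5), 3))   # 2017 -> 2022, units in millions: ~0.133, i.e. ~13% CAGR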

Faster Storage Protocols. Non-Volatile Memory Express (NVMe) is a new storage protocol designed to create faster and more reliable access to storage. The standard was developed by the NVM Express Working Group with version 1.3 published in May 2017. As stated above, we think adoption of the new protocol will be a big catalyst for Mellanox because it takes a historically slow part of the Data Center environment (Storage) and speeds it up. Faster data access feeds more bits to CPUs for calculation and the whole system gets faster. NVMe is an optimized, high-performance, and scalable interface that unlocks the full capabilities of new Solid State Storage devices. Existing Storage protocols like Fibre Channel and iSCSI were built years ago and designed around Hard Disk Drives (HDDs) with spinning disks. They were inherently serial technologies (i.e. they processed bits one at a time). NVMe is designed to leverage the full benefit of the latency and parallelism that Solid State Drives provide. With the most advanced hardware and latest NVMe protocols, storage access latency on SSDs is brought from 200+ microseconds down to 10 microseconds. The input/output operations per second (IOPS), a measure of storage performance, can be dramatically higher too. Single drive IOPS go from a few hundred thousand with traditional storage protocols to over 2 million with NVMe.
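To see why faster storage protocols put pressure on the network, a rough back-of-the-envelope conversion of IOPS into line rate is shown below (a Python sketch; the 4 KB I/O size is our illustrative assumption, not a figure from the report):

    # Rough conversion of storage IOPS into network bandwidth
    def gbps(iops, io_bytes=4096):         # assumes 4 KB I/Os for illustration
        return iops * io_bytes * 8 / 1e9

    print(round(gbps(500_000), 1))     # ~16.4 Gb/s - already more than a 10G NIC can carry
    print(round(gbps(2_000_000), 1))   # ~65.5 Gb/s - NVMe-class IOPS call for 25G/100G-class interconnect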

The working group published another important specification in June 2016 called NVMe Over Fabrics which enables NVMe to ride over Fibre Channel, Ethernet, and InfiniBand. We think Ethernet's inclusion in this is an important development because the protocol has historically been viewed as not reliable enough to carry storage traffic. Mission critical storage requires a lossless or near lossless protocol to maintain data integrity with read/write operations. We see this as an indication that Ethernet is becoming hardened and reliable enough to carry storage traffic. Of course, the Ethernet fabric must be specially configured with Quality of Service (QOS) and big buffers. This specific flavor of Ethernet is commonly referred to as Converged Enhanced Ethernet (CEE) or Data Center Bridging (DCB) Ethernet.


See Chart 18 below for our summary of Storage networking protocols and associated performance metrics.

Chart 18: Summary of Storage Access Networking Protocols (μs = microsecond = 1/1,000,000 of a second)
[Table comparing the protocol stacks and performance of FC, FCoE, iSCSI, iSER, and NVMe Over Fabrics (NVMeOF) running over FC, Ethernet, RoCE, and InfiniBand. Bandwidth ranges from 32Gbps for Fibre Channel to 25G/50G/100Gbps for the Ethernet-based options and 100G/200Gbps for InfiniBand. Latency improves from roughly 350-600 μs for FC, FCoE, and iSCSI to ~100 μs for iSER, ~10 μs for NVMeOF over RoCE, and ~1 μs for NVMeOF over InfiniBand, while IOPS rise from roughly 100k-500k for the legacy protocols to roughly 2.3 million for the RDMA-based NVMeOF options.]

Source: Jefferies Research, Mellanox Company Data

Storage Costs are Coming Down. According to Gartner, storage costs are about $544/TB for External Controller-Based Disk Storage, or roughly $0.50/GB, and expected to fall ~5%/year going forward. (Source: Gartner Forecast: External Storage Systems, Worldwide, All Countries, 2015-2022, July 2018). As storage costs come down, the new unit economics make new business cases possible. For example, storage cost previously made long retention times for video surveillance systems unfeasible – 24 hours to 7 days was considered normal. Now with cheaper storage, video surveillance systems are being built with retention times of a month or longer and enterprises are building analytics engines on top of them to drive insights and improve ROI. At the same time that Storage costs are declining, the cost of compute cycles continues to go down. Our discussion now shifts to the industry trends behind Compute.

Compute: Processor Evolution + PCIe 4.0 + GPUs

Intel Processor Architecture Changes Drive HPC Upgrades; Purley Ramping. An important driver of the Supercomputer investment cycle and most Hyperscaler Data Center upgrades is the availability of new processor architectures from Intel. The current major platform upgrade is the Purley server platform, the first platform to leverage the Skylake microarchitecture via the Intel Xeon Silver/Gold/Platinum scalable processor product family. The Platinum version of the product has up to 28 cores and is scalable to 8+ Sockets. Intel estimates that Purley delivers 1.41x to 2.38x (1.63x average) better performance on 13 representative HPC workloads versus their prior Broadwell E5-2600 series processor. Purley was made available to key customers in early 2017 and the first Purley-based Supercomputers showed up on the November 2017 Top 500 list.


We believe the Purley platform is still ramping with customers more broadly.

Further, the brisk pace of Intel’s “Tick-Tock” product release strategy has slowed in recent years which also contributed to smoother and more prolonged cycles. Intel previously delivered a “tick” and then a “tock” each alternating year. More recently, after the 14nm die shrink in 2014, Intel has communicated its intentions to deliver multiple tocks before the next die shrink to 10nm. We currently expect it to deliver the Cascade Lake microarchitecture upgrade in 2019 and then a concurrent die shrink and microarchitecture upgrade with Ice Lake and Cooper Lake in 2020. See Chart 19 below for a summary of Intel’s design and product release cycles.

Chart 19: Intel "Tick-Tock" Microarchitecture / Die Shrink Development Model (estimated based on Intel announcements)

"Tick" (die shrink): 45nm Penryn (2008); 32nm Westmere (2010); 22nm Ivy Bridge (2013); 14nm Broadwell (2015); 10nm Ice Lake (2020E)

"Tock" (new microarchitecture): Intel Core (2007; first high-volume server Quad-Core CPUs, 128-bit SIMD); Nehalem (2009; up to 6 cores and 12MB Cache, Hyperthreading); Sandy Bridge (2011; up to 8 cores & 20MB Cache, 256-bit AVX, AES-NI); Haswell (2014; AVX2, FMA3, Transactional Memory, discrete L4 Cache); Skylake (2017; up to 28 Cores, 6 channels DDR4/CPU, 48-lane PCIe 3.0); Cascade Lake (2019E; Intel Optane Persistent Memory, DLBoost: VNNI, Security Mitigations); Cooper Lake (2020E; Next Gen DLBoost: BFloat16)

Server platforms: Thurley (Nehalem and Westmere); Romley (Sandy Bridge, 2Q11 launch); Grantley (Haswell, 3Q14 launch); Purley (Skylake, 3Q17 launch)

Source: Intel Company Data, Jefferies Research

Development of PCIe 4.0. Peripheral Component Interconnect Express (abbreviated as PCIe) is a high-speed serial computer expansion bus standard. Within Servers (as well as PCs and other computing platforms), the PCIe industry standard governs the “bus.” The bus is the data pathway on the motherboard that connects the CPU, memory and peripherals. The evolution of the bus is interesting for Mellanox – the company benefits when the bus speed between the CPU and peripherals is upgraded. As the amount of data that can flow between the CPU and memory increases, we usually require more data to flow into the Server itself. Hence, interconnect (NICs) have to get faster as well.

The current PCIe 4.0 standard, which was finalized in October 2017, doubles the data rates per lane on the Server bus (see Chart 20). We note that Intel's CPUs are currently compatible with PCIe 3.0 – not 4.0. Although it hasn't announced anything yet, we think Intel will include PCIe 4.0 capability in one of its next few design cycles considering AMD and IBM processors already have the capability. Looking forward, we note that the PCI-SIG standards organization expects to finalize the next iteration of the spec in Q2'19. That spec – called 5.0 – will double the maximum Server bus speeds again. Lastly, we note that the gap between the last two iterations of the PCIe spec (3.0 in 2010 and 4.0 in 2017) spanned an unusually long time – 7 years. As such, the new spec really should provide a new catalyst for higher interconnect speeds.


Chart 20: Pace of PCI Bus Evolution in Servers Has Accelerated

Year    Standard    Bandwidth                               Frequency/Speed
1992    PCI 1.0     1.06 Gbps (32-bit half-duplex)          33 MHz
1993    PCI 2.0     1.06 Gbps (32-bit half-duplex)          33 MHz
1995    PCI 2.1     2.11 Gbps (32-bit half-duplex)          66 MHz
1999    PCI-X 1.0   8.51 Gbps (64-bit half-duplex)          133 MHz
2002    PCI-X 2.0   17.02 Gbps (64-bit half-duplex)         266 MHz
2002    PCIe 1.0    32 Gbps (x16 8b/10b full-duplex)        2.5 GHz
2006    PCIe 2.0    64 Gbps (x16 8b/10b full-duplex)        5.0 GHz
2010    PCIe 3.0    126 Gbps (x16 128b/130b full-duplex)    8.0 GHz
2017    PCIe 4.0    252 Gbps (x16 128b/130b full-duplex)    16.0 GHz
2019E   PCIe 5.0    504 Gbps (x16 128b/130b full-duplex)    32.0 GHz

Source: Jefferies Equity Research, PCI-SIG consortium
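The x16 bandwidth figures in Chart 20 follow directly from the per-lane transfer rate and the encoding overhead; a minimal Python sketch of that arithmetic (assuming 8b/10b encoding for PCIe 1.0/2.0 and 128b/130b for PCIe 3.0 and later, as in the chart):

    # PCIe x16 full-duplex bandwidth from transfer rate and encoding efficiency
    def pcie_x16_gbps(gt_per_s, payload_bits, total_bits, lanes=16):
        return gt_per_s * (payload_bits / total_bits) * lanes

    print(round(pcie_x16_gbps(8.0, 128, 130)))    # PCIe 3.0: ~126 Gbps
    print(round(pcie_x16_gbps(16.0, 128, 130)))   # PCIe 4.0: ~252 Gbps
    print(round(pcie_x16_gbps(32.0, 128, 130)))   # PCIe 5.0: ~504 Gbps
    print(round(pcie_x16_gbps(5.0, 8, 10)))       # PCIe 2.0: ~64 Gbps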

Increasing Relevance/Presence of GPUs in the Data Center. Intel's 2015 acquisition of a parallel processing veteran for 8x (declining) sales signaled an important inflection in Cloud processing. Hyperscale players are increasingly running deep learning (facial recognition), Big Data analytics, and security applications that run more efficiently on processors with parallel architectures like NVidia graphics chips (GPUs). With the breakdown of Moore's law, Intel's clock speed and single-thread CPU performance have leveled off and it has relied more heavily on increasing the core count to improve performance. NVidia has always optimized performance from this different angle with their very high core count GPUs and lower single-thread performance. NVidia's Tesla V100 based on the Volta architecture has 80 physical cores and 5,120 logical cores versus Intel Skylake's 28 physical core and 56 logical core architecture. The Tesla GPU can achieve 7,014 GigaFLOPS (Billion Floating Point Operations Per Second) of double-precision performance while the Skylake CPU achieves 2,240. Of course, the GPU performance assumes instructions can achieve maximum parallelization, which is not always the case. Intel's single thread performance is still industry leading; however, we should see GPUs increasingly used in the Data Center for repetitive, parallelizable tasks that take massive amounts of computing power (i.e. AI, Machine Learning, and Big Data). The role of GPUs in the Data Center is moving the industry forward and pushing the boundaries of price/performance of compute resources – great for Mellanox.

Chart 21: CPU / GPU Computing Speed Terminology: FLOPs
Term        Number of FLOPs   Description
ZettaFLOPs  10^21             sextillions
ExaFLOPs    10^18             quintillions
PetaFLOPs   10^15             quadrillions
TeraFLOPs   10^12             trillions
GigaFLOPs   10^9              billions
MegaFLOPs   10^6              millions
KiloFLOPs   10^3              thousands
Source: Jefferies Research

Below, we shift our discussion to current technology advancements in Networking.

Networking

Continuing with our framework from Chart 5, we note that the pending transition from 100G to 400G in the Data Center spine should also help drive the Interconnect business at Mellanox. Chart 22, for example, illustrates the development of merchant switch silicon from Broadcom. As the chart shows, Switch ASIC capacity has grown exponentially in recent years. Moreover, Broadcom – over the next several years – will double and quadruple the capacity of its current generation silicon with the Tomahawk 4 and Tomahawk 5 products.


Chart 22: Switch ASIC Capacity Has Grown Exponentially

Chip         | ASIC Aggregate Bandwidth | Customer Sampling Date | GA Date | Semiconductor Geometry
BCM53262     | 240 Gbps   | 2008            | 2008  |
Trident      | 640 Gbps   | March 2010      | 2011  | 40nm
Trident 2    | 1.28 Tbps  | August 2012     | 2013  | 40nm
Tomahawk     | 3.2 Tbps   | September 2014  | 2015  | 28nm
Tomahawk 2   | 6.4 Tbps   | October 2016    | 2017  | 16nm
Tomahawk 3   | 12.8 Tbps  | December 2017   | 2018  | 16nm
Tomahawk 4   | 25.6 Tbps  | 2019E           | 2020E |
Tomahawk 5   | 51.2 Tbps  | 2021E           | 2022E |

53x Expansion in Ethernet Switch Capacity in 10 years.

Source: Jefferies Equity Research, Broadcom specs

Similarly, Optical transceivers continue to evolve as well. Chart 23 illustrates. Of course, 400G optics are expected to start getting deployed in Data Center spine networks in 2019. As Data Center spines transition to 400G, it’s natural for the interconnects from Top-of-Rack Switches to Servers to migrate to higher 25/50/100G speeds as well. Of course, this is a positive for Mellanox.

Chart 23: Optical Transceiver Evolution – 1 to 400 Gbps in Two Decades
[Chart of optical transceiver bandwidth vs. year of MSA/spec introduction (2001 – 2017): SFP at 1 Gbps, XFP and SFP+ at 10 Gbps, QSFP+ at 40 Gbps, CFP, CFP2, CFP4, and QSFP28 at 100 Gbps, and OSFP and QSFP-DD at 400 Gbps]

Source: Jefferies Equity Research


Latency "Whack-a-Mole"

Bottlenecks Shift Around in the Data Center – Storage No Longer the Weakest Link. As shown in Chart 24 below, the current bottleneck in the Data Center is the Server NIC to Top-of-Rack Switch connection (and PCIe 3.0). That's a major shift from recent years where the technology bottleneck came from Storage infrastructure. We note that it's a common practice for Data Center architects to build "balance" between Compute, Storage, and Networking resources in their networks. Without balance, one component can slow the entire system down or create stranded resources. For example, if Storage is too slow (i.e. too much latency), there might not be enough data flowing to the CPUs for processing. Hence, investments in Servers aren't getting fully utilized. Conversely, if the CPU is too slow (under-clocked or too few cores), the Network links might be running at low utilizations. Therefore, Networking investments might be getting stranded. If the Network is too slow (too much latency or not enough bandwidth), packets can't flow from storage to the CPU fast enough for calculation, which leads back to low CPU utilization and stranded investments.

The large Cloud providers are especially sensitive to cost efficiency and stranded resources. Their business strategy is predicated on high performance hardware running at high utilizations. The important thing about this concept for equipment vendors is that network builders, Cloud providers, and enterprises are most motivated to invest in upgrades to the weakest link (i.e. the bottleneck) that currently exists in the system. It makes sense that buyers would be unmotivated to upgrade the faster or more powerful components because it would just strand more resources there.

Based on our analysis, the shifting Data Center bottleneck now lies in the Server NIC to Top-of-Rack Switch connection. We note that the most commonly deployed Server NIC operates at 10 Gb/s. These NIC cards utilize a 16-lane (x16) PCIe configuration which provides 15.75 GB/s of I/O bandwidth between the CPU and peripherals. With the CPU and Storage components of Data Center infrastructure advancing rapidly, it now makes sense for Data Center operators to upgrade these components and increase the total throughput of their systems. Specifically, customers should be motivated to upgrade to 25G NICs and PCIe 4.0 Server buses. Below, we discuss the 10G-to-25G NIC transition in greater detail.
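The mismatch is easy to quantify: a 10G NIC can feed only a small fraction of what a PCIe 3.0 x16 slot can move. A minimal Python sketch of that comparison (our arithmetic, using the figures cited above):

    # Server NIC line rate vs. PCIe 3.0 x16 host bandwidth
    pcie3_x16_gbytes = 15.75                  # PCIe 3.0 x16 I/O bandwidth, GB/s (cited above)

    for nic_gbits in (10, 25, 50, 100):       # NIC speeds in Gb/s
        nic_gbytes = nic_gbits / 8
        print(nic_gbits, round(nic_gbytes / pcie3_x16_gbytes, 2))
    # A 10G NIC uses ~8% of the slot, 25G ~20%, 100G ~79% - the NIC, not the bus,
    # is the bottleneck until server uplinks move to 25G and beyond.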


Chart 24: Summary of Speeds, Feeds, and Bottlenecks in the Data Center (2005 – 2018)
[Table tracking, for each year from 2005 to 2018, the prevailing speeds of Core, Spine/MoR, and Leaf/ToR switching links, Server NICs, the PCIe bus, Intel server CPU platforms, Storage protocols, memory/storage access, and SERDES rates. Mechanical/physical limitations of storage access were the bottleneck in the data center for many years; with faster storage, the bottleneck has since moved to the network – today, to the 10G Server NIC to Top-of-Rack connection.]

Source: Jefferies Research

Ethernet: 10G to 25G+ Data Center NIC Transition – A Significant Opportunity for Mellanox
Most Data Center Switching Ports Are Still 10G. The vast majority of Server to Top-of-Rack Switch connections in the Data Center are still 10G links. According to IHS Infonetics, 10G switching ports shipped into Data Centers in Q2'18 still totaled 6.8 million, or 48% of total port shipments. For comparison, 25G port shipments in Q2'18 totaled 1.6 million, or 11% of the total. Chart 25 illustrates the transitions between generations of Data Center switch ports.


Chart 25: Data Center Switching Port Shipments (2009 – 2022E), in millions of ports per year, split by speed (1GE, 10GE, 25GE, 40GE, 50GE, 100GE, 200GE, 400GE)

Source: IHS Infonetics, Jefferies Research

As shown in Chart 25, we're still in the very early stages of the transition to 25G (and above) Ethernet. By the end of 2022, IHS Infonetics expects shipments of 38.5 million 25G/50G/100G Ethernet ports, or 9.6 million/quarter. That's a 33% CAGR over the next 4 years. We think the IHS Infonetics numbers will also prove to be conservative. Based on our conversations with industry contacts, the 10G-to-25G upgrade is expected to progress much like the 1G-to-10G upgrade did starting in 2011. From the point that 1G peaked and started declining, the market for 10G Ethernet ports experienced a 42% CAGR over the subsequent 4 years.
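For readers who want to sanity-check the growth math, the short sketch below applies the standard CAGR formula to the IHS Infonetics forecast cited above. The roughly 12 million-port 2018 base is not an IHS-published figure – it is simply the value implied by a 38.5 million-port 2022 endpoint and a 33% four-year CAGR.

# Illustrative CAGR arithmetic; checks the figures cited above.
def cagr(begin: float, end: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end / begin) ** (1.0 / years) - 1.0

ports_2022 = 38.5e6                      # IHS Infonetics 2022 forecast, 25G/50G/100G ports
stated_cagr = 0.33                       # ~33% CAGR over 4 years, per the text
implied_2018_base = ports_2022 / (1 + stated_cagr) ** 4

print(f"Implied 2018 base: {implied_2018_base / 1e6:.1f}M ports")          # ~12.3M
print(f"Check: 4-yr CAGR = {cagr(implied_2018_base, ports_2022, 4):.0%}")  # 33%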

The Pitch for 25G. The progression to 25G for server uplink connections wasn't always an obvious one. The industry has been used to re-using/shuffling technology downward in the stack to give it the longest life. 10G was once used for core and aggregation connections before it was installed for Server uplinks. By that logic, it would have seemed natural in 2014-2016 for 40G (which was being used in the core layers at the time) to subsequently be used to upgrade server uplinks when more bandwidth was needed. A group spearheaded by Google and Microsoft had a different idea and started the 25G Ethernet Consortium in 2014. To some industry participants, it seemed unusual considering the 40G standard already existed, but the benefits of 25G quickly became clear. Below, we outline the biggest benefits of the 25G, 50G, and 100G upgrade path for Server uplinks.

1. 10G Just Isn’t Fast Enough. As noted above, the evolution of Storage and CPU technology increasingly pushes the capacity bottleneck onto the Interconnect portions of the Data Center infrastructure. Hence, 25G, 50G, and 100G give Data Center buyers the ability to scale the NIC to meet their needs.

2. Cost. 40G NICs are simply not cost effective anymore. They use four "lanes" of 10G with an internal multiplexing function. Structurally, that means four lasers and a 4-way multiplexer at both ends of a Switch-to-Server connection. 50G requires two lanes of 25G with two lasers and a two-way multiplexer. It's actually cheaper than 40G. A 25G interface simply uses a faster clock and SERDES than 10G (25 GHz versus 12.5 GHz), so it's about 25% more expensive for 2.5x the speed.

3. Smoother Decoupling of Switch and Server Lifecycles. Switches and Servers in an ICP Data Center have different lifecycles and customers want to have the flexibility to upgrade components at different times. The average lifecycle of a server is 3-4 years while switches can run 7-10 years. Spine and Core layers are already at 100G for any new ports deployed. Therefore, the ability to operate server uplinks at 25G, 50G, or 100G is a great way to flexibly future proof. This decoupling is much harder at 10G and 40G.

4. No Need to Ditch Cabling. This is perhaps the best part of the 10G-to-25G upgrade for the ICPs. Naturally, they prefer to install cables only once. Cabling can be a big capital expenditure in the Data Center and the need to upgrade cables can often make an upgrade cost prohibitive. For the most part, ICPs with 10G infrastructure can make the upgrade to 25G simply by swapping out the transceivers and NICs. They continue to use their existing cabling which saves a lot of cost.

Convergence of Compute, Storage and Networking Presents Opportunity for Mellanox
One of the key trends in Data Center networks is that operators are looking for ways to converge multiple disparate communication protocols used for Compute, Storage, and Networking. As shown in Chart 26, it's quite common for large Enterprises to utilize InfiniBand for Server-to-Server interconnect, Fibre Channel for Storage, and Ethernet for Networking. The management of multiple protocols and networks creates complexity, adds costs, and creates performance bottlenecks. Minimally, the convergence to a single protocol eliminates the need for multiple adaptors and reduces the cabling footprint. Our conversations with industry contacts suggest that Fibre Channel will have a long tail but is likely to be supplanted by FCoE/Converged Ethernet or InfiniBand over time.


Chart 26: Data Centers Are Moving Toward Converged Infrastructure

Today: separate fabrics – an InfiniBand switch serving the InfiniBand cluster, a Fibre Channel switch serving Fibre Channel storage, and an Ethernet switch serving the Ethernet network.
Converged: a single Ethernet fabric carries RDMA over Converged Ethernet (RoCE) for the cluster and Fibre Channel over Ethernet (FCoE) for storage. Supporting protocols can be encapsulated in Ethernet and carried over the same rails, removing the need for redundant hardware.

Source: Jefferies Research

Projections by market research firm IHS Infonetics (Charts 27 and 28 below) support this view – total Fibre Channel Switch and Board revenue is expected to remain flattish, while FCoE is expected to ramp. We believe that Mellanox's InfiniBand solutions are very well suited to facilitate this convergence and are particularly attractive in light of their high performance (low latency, high throughput) and support for other protocols over InfiniBand via encapsulation (such as Ethernet, iSCSI, etc.).


Chart 27: Fibre Channel HBAs vs. Storage Ethernet NICs – Revenue ($, in millions) (2016 – 2022E)
Chart 28: Fibre Channel HBAs vs. Storage Ethernet NICs – Shipments (in millions) (2016 – 2022E)
'17-'22 CAGR (Revenue): Fibre Channel HBAs -6.9%; Storage Ethernet NICs +31.0%
'17-'22 CAGR (Units): Fibre Channel HBAs -8.4%; Storage Ethernet NICs +17.0%

Source: IHS Infonetics (September 2018)

InfiniBand – A Primer
As discussed in the product section below, Mellanox also sells standalone InfiniBand ICs, Switches, and physical cables. It's traditionally been the company's core business. InfiniBand is a networking protocol that provides for very high performance, low latency data transmission. It was designed with an aim to improve the performance of applications running on processors. From a practical standpoint, it's often implemented in a Host Channel Adaptor, or HCA (an HCA is a circuit board with a switch IC and a networking port). The InfiniBand specs are governed by the InfiniBand Trade Association (IBTA), which was founded in 1999 by IBM, Intel, Mellanox, Oracle, HP, and others. Notably, none of the incumbent Ethernet switch vendors are in this list. While InfiniBand is an open standard, it has been highly associated with Mellanox. The company has 85-90% share of the InfiniBand interconnect market. The technology has traditionally been used for Server-to-Server interconnects, particularly for High Performance Computing (HPC) applications. As discussed in more detail below, InfiniBand is also increasingly being used in Storage applications, particularly for "backend" interconnect (i.e. the communication between Storage nodes). One of the key strengths of InfiniBand is that it provides very low latency, high throughput links that enable very high utilization of CPUs.

RDMA: Making the InfiniBand Protocol "Light". The superior performance characteristics of InfiniBand are chiefly derived from the use of RDMA, or Remote Direct Memory Access. Effectively, RDMA allows applications to access and write to the memory of other applications with minimal involvement from CPUs (both the host and target processors). In other traditional protocols like Ethernet or Fibre Channel, the communication "pipe" to the world outside the Server or Storage entity is controlled by the Operating System (OS). This means that an application cannot access this communication pathway without involving the local OS and, in turn, encumbering the processor (CPU). The host processor handles most of the work of transporting data from the portion of memory allocated to the application (the "buffer space"), checking data integrity, signaling between the different layers of the networking stack, and sending it out a physical networking port. Memory is also consumed as multiple copies of data are created in memory through this process.

InfiniBand takes a different approach with RDMA. InfiniBand provides what's called a "messaging service" for applications. The management of the protocol stack is handled in hardware (i.e. in the InfiniBand HCA) without the involvement of the host CPU, and applications are allowed to write to each other's respective memory directly. In effect, a channel is created between the allocated memory spaces of the two applications. Notably, InfiniBand is also a "lossless" protocol as a result of the flow control mechanism it employs. Conceptually, it's a "credit-based" protocol, which means that it keeps track of the amount of memory that is available for writing and prevents transmission if space is not available. This contrasts with TCP/IP/Ethernet, which handles congestion by allowing packets of data to be dropped and re-sent. The lossless nature of InfiniBand is one of the reasons its realized throughput is better than Ethernet's.
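To make the credit-based, lossless behavior concrete, here is a deliberately simplified toy model (our illustration, not Mellanox's implementation): the sender transmits only when the receiver has advertised free buffer credits, so nothing is ever dropped, whereas a TCP/IP/Ethernet sender would keep transmitting and rely on drops and retransmissions to signal congestion.

# Toy model of InfiniBand-style credit-based (lossless) link-level flow control.
from collections import deque

class CreditedReceiver:
    """Receiver advertises 'credits' = free buffers; the sender may only transmit against them."""
    def __init__(self, buffers: int):
        self.credits = buffers
        self.queue = deque()
        self.delivered = 0

    def accept(self, packet) -> None:
        assert self.credits > 0, "sender violated flow control"
        self.credits -= 1                 # one receive buffer consumed
        self.queue.append(packet)

    def drain_one(self) -> None:
        if self.queue:
            self.queue.popleft()          # application consumes the data...
            self.delivered += 1
            self.credits += 1             # ...and the credit is returned to the sender

def send_all(rx: CreditedReceiver, packets) -> None:
    for p in packets:
        while rx.credits == 0:            # back-pressure: pause instead of dropping
            rx.drain_one()                # (stand-in for the receiver draining asynchronously)
        rx.accept(p)

rx = CreditedReceiver(buffers=4)
send_all(rx, range(20))
while rx.queue:
    rx.drain_one()
print(rx.delivered)                       # 20 -- every packet arrives; nothing is re-sent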

Chart 29: InfiniBand Creates a Channel for I/O between Applications with Minimal Host CPU Involvement…

Two applications connect their buffers directly over a virtual address space connection; the OS and NIC on each host sit below the applications, joined by the physical connection.

Source: Jefferies Research, Mellanox Company Data

To highlight InfiniBand’s advantages, Chart 30 looks at a scenario where a workload moves between two Servers – once via InfiniBand and once via Ethernet. In the InfiniBand scenario, the workload moves across in 46 seconds, 36% faster than the 1:16 it takes in the Ethernet example. Also, notice the drastic difference in the destination CPU utilization rate. It’s clear that InfiniBand allows the CPU to run more efficiently and reserve processing cycles for non-networking tasks.


Chart 30: InfiniBand’s Benefits Extend to More Efficient (lower CPU utilization) and Faster Migration of Virtual Machines

When a VMware virtual machine is migrated to a destination machine, the destination machine CPU utilization is much lower with Infiniband than TCP/IP. The speed of migration also tested ~36% faster.

Source: openfabricalliance.org, VMware

InfiniBand Speeds and Nomenclature. The InfiniBand standard specifies speeds by physical "lanes." As shown in Chart 31, each lane speed has been given a shorthand name. For example, Single Data Rate (SDR) = 2.5 Gbps per lane while Fourteen Data Rate (FDR) = 14 Gbps/lane. A typical InfiniBand connection deployed in a Data Center has 4 lanes, thus the fastest available technology today is EDR. EDR was made available in 2014 by Mellanox and is most commonly deployed at 104 Gbps. As shown in Chart 32, the InfiniBand roadmap aims to deliver High Data Rate (HDR) next (4x = 200 Gbps). Mellanox made HDR available in late 2017 and expects deployments to begin generating revenue in 2H'18. Notably, InfiniBand speeds have stayed meaningfully (i.e. years) ahead of Ethernet since inception. SDR (10G) has been available from Mellanox since 2002 while 10G Ethernet at the Server level has really only been shipping commercially since 2007. Similarly, EDR came out in early 2015 versus 100G Ethernet solutions at the Server level that took a couple of years longer. This InfiniBand technology lead is slipping away, however. Based on Mellanox's product announcements and our conversations with industry contacts, the InfiniBand speed gap is essentially closed as Mellanox's ConnectX-6 line of adapter cards supports 200G for both InfiniBand (HDR) and Ethernet configurations. Although the speed advantage has closed, we still expect InfiniBand to continue to have superior performance characteristics (latency/CPU utilization) versus Ethernet.

Chart 31: InfiniBand Speeds – Total Link Bandwidth
Number of Lanes =>          1x         4x         12x
Single Data Rate (SDR)      2.5 Gbps   10 Gbps    30 Gbps
Dual Data Rate (DDR)        5 Gbps     20 Gbps    60 Gbps
Quad Data Rate (QDR)        10 Gbps    40 Gbps    120 Gbps
Fourteen Data Rate (FDR)    14 Gbps    56 Gbps    168 Gbps
Enhanced Data Rate (EDR)    26 Gbps    104 Gbps   312 Gbps
High Data Rate (HDR)        50 Gbps    200 Gbps   600 Gbps
Source: InfiniBand Trade Association
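The arithmetic behind Chart 31 is simply the per-lane signaling rate multiplied by the lane count – e.g., EDR at 26 Gbps/lane x 4 lanes = 104 Gbps. The short sketch below (our illustration) reproduces the table.

# Reproduce Chart 31: total InfiniBand link bandwidth = per-lane rate x number of lanes.
PER_LANE_GBPS = {"SDR": 2.5, "DDR": 5, "QDR": 10, "FDR": 14, "EDR": 26, "HDR": 50}
LANE_COUNTS = (1, 4, 12)

for name, lane_rate in PER_LANE_GBPS.items():
    row = ", ".join(f"{lanes}x = {lane_rate * lanes:g} Gbps" for lanes in LANE_COUNTS)
    print(f"{name}: {row}")
# e.g. EDR: 1x = 26 Gbps, 4x = 104 Gbps, 12x = 312 Gbps  (matches Chart 31)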


Chart 32: Infiniband Speed Roadmap

Year    Standard   1X         2X         4X         8X         12X
2005    DDR        5 Gb/s     NA         20 Gb/s    40 Gb/s    60 Gb/s
2007    QDR        10 Gb/s    NA         40 Gb/s    80 Gb/s    120 Gb/s
2011    FDR        14 Gb/s    NA         56 Gb/s    112 Gb/s   168 Gb/s
2014    EDR        26 Gb/s    NA         100 Gb/s   200 Gb/s   300 Gb/s
2017    HDR        50 Gb/s    100 Gb/s   200 Gb/s   400 Gb/s   600 Gb/s
2020E   NDR        100 Gb/s   200 Gb/s   400 Gb/s   800 Gb/s   1.2 Tb/s
Future  XDR        ?          ?          ?          ?          ?

*4X is the most common aggregate link configuration; links can be aggregated from 1 lane (1x) to 12 lanes (12x) Source: Jefferies Research, Infiniband Trade Association

The Rise of Massively Parallel Cluster Computing Drives the Need for Efficient Interconnect. The field of supercomputing was arguably pioneered in the 1960s by Seymour Cray at Control Data Corporation and later at Cray Research in the form of single proprietary systems. For many years, supercomputers were mainframes. Over time, the mainframes have been supplanted by thousands of general purpose Servers working together in parallel to perform intense computational tasks. Given this fundamental change in architecture, it makes sense that a high performance Interconnect like InfiniBand would see increasing adoption vis-à-vis less efficient protocols such as Ethernet. Chart 33 illustrates. InfiniBand adoption was driven by the speed and latency advantages of the lightweight protocol and the RDMA architecture.

Chart 33: Interconnect Share by Type – Top 500 Supercomputers (Infiniband, Ethernet, Custom Interconnect, Other)
InfiniBand's share of the Top 500 steadily rose from 2004-2015 to over 50% but has declined since then; Ethernet has picked up the most share.

Source: Top500.org, Jefferies Research

More recently, non-traditional HPC players (Web 2.0 and Cloud companies) have been submitting their results for inclusion in the Top 500. This has been somewhat controversial in the HPC community because Cloud and Hyperscale systems have the computing power but they're designed for different purposes than traditional HPC systems. They have a mass market business case and are used for many other purposes than modeling and simulation. Hyperscalers generally have infrastructures with 100s of thousands of servers/compute nodes in a data center versus traditional HPC with under 10 thousand. Since Hyperscalers use their infrastructures for many other business cases, they tend to use Ethernet for its flexibility and security. Mellanox estimates that nearly half of the systems on the latest Top 500 list are non-traditional HPC systems (i.e. Hyperscalers largely connected by Ethernet). As a result, we see the decline of InfiniBand-powered systems in the Top 500 list as a proliferation of the use cases for Supercomputing rather than a shift in HPC buying behavior from InfiniBand to Ethernet. According to Mellanox, InfiniBand powers 60% of the traditional HPC systems on the list. It's also important to note that Mellanox is the Interconnect vendor for all 25Gbps and above Ethernet systems on the list. When we combine Mellanox InfiniBand and Ethernet solutions, their share of the Top 500 list is 43%. Net-net, we expect the InfiniBand market to remain flat-to-slightly-up on a go-forward basis.

Ethernet NICs – A Fast-Growing Market
Chart 34 highlights the market growth for the overall Ethernet NIC market. As the chart shows, the market is expected to grow quickly – the overall space is projected to hit $3.1 billion in 2022, a 5-year CAGR of 22%. We note that Basic NICs are expected to decline rapidly as a percentage of the overall market. Offload and Programmable NICs are increasingly favored as they help customers get more productivity out of their Data Center investments. If we look at just the anticipated growth for the combination of Offload and Programmable NICs (i.e. where Mellanox plays), the market is expected to grow from $625 million in 2018 to $2.66 billion in 2022, a 34% CAGR over 5 years.

Chart 34: Total Ethernet NIC Market Size (2016 – 2022E), Revenue (USD in millions)
2016: $963   2017: $1,153   2018E: $1,402   2019E: $1,705   2020E: $2,078   2021E: $2,538   2022E: $3,110
(Mix shown across Basic NIC, Offload NIC, and Programmable NIC.)

Source: IHS Markit, September 2018


Parsing the market for just 25G, 40G, 50G, and 100G Ethernet NIC cards (i.e. 25G and above), we note that the market is expected to grow at a 48% CAGR over the 2018-2022 time frame. Chart 35 illustrates.

Chart 35: Ethernet NIC Market Size for 25G, 40G, 50G, and 100G (2016 – 2022E), Revenue (USD in millions)
2016: $209   2017: $350   2018E: $578   2019E: $857   2020E: $1,226   2021E: $1,737   2022E: $2,445
(Mix shown across Basic NIC, Offload NIC, and Programmable NIC.)

Source: IHS Markit, September 2018

Chart 36 reinforces our Industry Discussion above. Driven by significant new end-market demand (i.e. traffic growth) and enabled by new technology developments in Storage, Compute, and Networking, the market will shift rapidly to 25G (and above) NIC cards over the next several years. This is a core product line for Mellanox. We estimate that it currently accounts for one-third of company sales with significant growth in the future. Below, our discussion shifts to Mellanox's business itself.

Chart 36: Speed Transitions in the Ethernet NIC Market (2016 – 2022E)
The Ethernet NIC Market is Expected to Shift Rapidly to 25G and 100G NICs. 1G Will Diminish Quickly While 40G is Viewed as an Interim Technology.

Port Shipments   2016         2017         2018E        2019E        2020E        2021E        2022E
1G               7,366,070    6,779,479    5,564,855    3,996,010    2,229,265    1,048,594    398,161
10G              7,068,136    7,932,288    8,350,857    9,104,119    9,771,571    9,481,563    7,868,719
25G              297,223      1,019,883    2,823,909    4,600,361    7,170,907    10,387,763   13,988,189
40G              1,016,733    895,734      1,185,158    1,417,188    1,185,304    547,316      95,095
50G              73,477       160,773      209,246      333,582      429,183      534,261      668,919
100G             37,658       187,667      326,923      668,380      1,250,535    2,205,353    3,658,694
Total            15,859,296   16,975,824   18,460,948   20,119,640   22,036,766   24,204,850   26,677,777

Port Shipments (% of Total)   2016     2017     2018E    2019E    2020E    2021E    2022E
1G                            46.4%    39.9%    30.1%    19.9%    10.1%    4.3%     1.5%
10G                           44.6%    46.7%    45.2%    45.2%    44.3%    39.2%    29.5%
25G                           1.9%     6.0%     15.3%    22.9%    32.5%    42.9%    52.4%
40G                           6.4%     5.3%     6.4%     7.0%     5.4%     2.3%     0.4%
50G                           0.5%     0.9%     1.1%     1.7%     1.9%     2.2%     2.5%
100G                          0.2%     1.1%     1.8%     3.3%     5.7%     9.1%     13.7%
Total                         100.0%   100.0%   100.0%   100.0%   100.0%   100.0%   100.0%

Source: IHS Markit, September 2018


Company Overview
Mellanox, based in Yokneam, Israel, and Sunnyvale, CA, is a leading supplier of high-performance Interconnect solutions for Data Center networks. Their products – sold as Circuit Boards, Integrated Circuits (ICs), Cables, and Switches – are used to connect Servers, Storage, and Networking devices together in Data Centers. Historically, Mellanox has been leveraged to the High Performance Computing (HPC) / supercomputer market – primarily with their InfiniBand products. The company is the market-share leader in that space with ~85% market share of InfiniBand Host Channel Adaptors worldwide. With a new product initiative that began in 2013, Mellanox has emerged as a leader in Ethernet products as well. They are now the number 1 provider of Ethernet Network Interface Cards at 25G speeds and higher. From a customer perspective, the HPC space now contributes less than 40% of sales with the balance coming from Storage, Web 2.0/Cloud companies, and traditional enterprises. Mellanox acquired privately-held Kotura and IPtronics in 2013 and public company EZchip in 2016; these deals provide Mellanox with Silicon Photonics (SiP) and System-on-a-Chip (SoC) technology, respectively. The company generated 2017 revenue of $864 million and employed 2,448 people at year-end 2017. Chart 37 highlights the success it has had in recent years.

Chart 37: Mellanox Annual Revenue (2005 – 2020E), ($ in millions)
The chart shows 2017 revenue of $864 million rising to Jefferies estimates of $1,078 million (2018E), $1,263 million (2019E), and $1,439 million (2020E). The EZchip acquisition added $20-25 million in quarterly revenue in 2016 ($80-100 million annually). Mellanox attributed the 2012 bulge to pent-up demand for Intel's Romley platform and customer inventory build.
*Estimate years are Jefferies estimates
Source: Jefferies Research

Leader in High-Performance Interconnect. Mellanox, which was founded in 1999, grew up with the Supercomputing market through the 2000s. It was one of the first to push the InfiniBand architecture and standards forward in 2000. Over the subsequent 10 years, Mellanox continued to innovate with its InfiniBand portfolio. Over that period, InfiniBand's share of supercomputers in the Top 500 went from 1% in 2003 to 52% in 2015. Since 2015, InfiniBand's share of the Top 500 systems has declined slightly due to the rise of Ethernet as well as Internet Content Providers' push to be included on the list. Mellanox's share of overall systems – i.e. including InfiniBand and Ethernet – has held strong at 43%, however. The company's reputation and relentless focus on performance and latency have solidified its status as the leader in high-performance interconnects. We believe this position will serve it well as the rise of Cloud Services, Big Data, AI, and Machine Learning prompts a broader commercialization of High-Performance Computing practices.

The Strategic Push into Ethernet. Mellanox's experience with InfiniBand helps its development of Ethernet solutions. Here, it has been able to leverage a lot of its InfiniBand experience and technology (low latency, CPU offloading, and RDMA). Its Ethernet products primarily go into Hyperscale (Web 2.0 & Cloud) and Enterprise Data Center applications to interconnect servers and storage infrastructure. We note that the architectures of Enterprise Data Centers are increasingly starting to look like Hyperscale and Private Cloud networks – hyper-converged infrastructure vendors are building the efficiencies of Cloud infrastructure into their products. As shown in Chart 38 below, Ethernet products have been ramping over the past two years. Moreover, they're the core growth driver for the business. As of Q3'17, Ethernet products exceeded InfiniBand sales for the first time in company history and accounted for more than 50% of sales.

Chart 38: Mellanox Revenue Trend by Protocol ($, in millions) (Q1'2012 – Q2'2018); quarterly revenue split across InfiniBand, Ethernet, and Other.

Source: Jefferies Research

Products
Mellanox is a provider of end-to-end "Interconnect" solutions. Simply put, Interconnect products allow the transfer of data between various network compute and storage hardware elements such as Servers, Switches (an Interconnect product itself), and Storage arrays. Interconnection between network and compute resources uses communication protocols such as Ethernet, InfiniBand (where Mellanox is dominant), Fibre Channel, and the emerging Fibre Channel over Ethernet (FCoE) protocol. Mellanox is largely a vertically integrated supplier; the company develops its own Ethernet and InfiniBand chips and Switches as well as cables (though the company uses external partners like TSMC and ASE to fab chips). As the company states, it provides Interconnect from "PCI Bus to PCI bus." We believe that control of the physical layer technology is growing in importance and vertical integration is an effective way to compete in networking. Chart 4 earlier in this report illustrates the company's products in hyperscale and enterprise data centers. Chart 39 below illustrates the organization's products in High-Performance Computing.


Chart 39: Mellanox: An End-to-End Interconnect Provider

The diagram depicts an HPC supercomputer (mostly InfiniBand): Mellanox InfiniBand adapter cards in each node/server, InfiniBand edge and director switches (10-20 switches per system), and Mellanox InfiniBand DAC and AOC cables and transceivers (40G, 56G, 100G, 200G) tying together compute, switching, storage, and other infrastructure, alongside an example supercomputer floor plan and rack layout.

Source: Company Data, Jefferies Equity Research

Chart 40 and Chart 41 highlight the company’s mix of business by Technology and Product Form Factor.

Chart 40: Revenue by Technology – Q2'18
Ethernet: 58.7%; InfiniBand: 38.0%; Other: 3.3%

Source: Company Data, Jefferies Equity Research


Chart 41: Revenue by Product Form Factor – Q2'18
Board Revenue: 51%; Switch System: 21%; Cable/Other: 17%; IC Revenue: 11%

Source: Company Data

Ethernet Products Overview
Mellanox sells end-to-end Ethernet products including Network Interface Cards (NICs), Switches, NIC & Switch ICs, Cables, and Transceivers. In Q2'18, Ethernet product sales totaled $157.5 million, or 59% of company sales. We expect the mix to continue to grow as a percentage of sales as the business is outpacing InfiniBand growth.

ConnectX Ethernet NICs – Mellanox brands its Ethernet NICs under its ConnectX product line. The “X” represents the fact that its network adapters are multiprotocol and capable of supporting Ethernet or InfiniBand on the same card. Mellanox sells an Ethernet version and a version with Virtual Protocol Interconnect (VPI) which can actively sense the protocol of the traffic and support Ethernet or InfiniBand. Mellanox adapters come in Ethernet speeds of 10, 25, 40, 50, 100, and 200 Gbps and a variety of single-port and dual-port configurations. The competitive advantage and differentiation of Mellanox NICs are the industry leading sub-microsecond latency and the variety of CPU, security, and storage offload capabilities like OVS offload, SR-IOV, IPsec/SSL, iSER, and NVMe over fabric. Mellanox NICs have the most advanced multi-host features and have an internal PCIe switch enabling the NIC to create an internal network between sockets on the server. Ethernet NICs represent the vast majority of Mellanox’s Ethernet sales. The company doesn’t disclose the breakout but we estimate that Ethernet NICs represent approximately 60-70% of total Ethernet sales and 30-40% of total company sales.

LinkX Cables and Transceivers – Mellanox brands its Ethernet and InfiniBand cables and transceivers under its LinkX product line. LinkX products are available in both Ethernet and InfiniBand protocols and SFP and QSFP form factors. Copper Direct Attach Cables (DAC) are primarily used in short reach, lower bandwidth applications (i.e. server to top-of-rack switch or cross connect). DAC splitter cables and adapters are also used to take multiple 10G or 25G links into a larger aggregated port on a Top-of-Rack switch. DAC cables are becoming less relevant as speeds continue to increase because optical connections become much more of a necessity. Its 10Gbps DACs come in a max length of 7m and its 100Gbps DACs come in a max length of 5m. Mellanox also has a full portfolio of Active Optical Cables (AOCs) which use optical transceivers and optical fiber transmission. AOCs are used for high-bandwidth, longer distance use cases like Top-of-Rack to End-of-Row or Router connections (across rows and between racks). AOCs are increasingly being used for Server to Top-of-Rack connections because of their higher bandwidth capabilities, lower error rates, and reliability. Mellanox also sells transceivers with data rates and form factors ranging from 1Gb/s SFP and 10Gb/s SFP+ to 100Gb/s QSFP28. LinkX cables and transceivers fall into Mellanox's Cable/Other category which represents approximately 15-20% of sales (Ethernet & InfiniBand). Specifically, Cables/Other represented $154 million for fiscal 2017, or 18% of sales.

Spectrum Ethernet Switches – Mellanox brands its Ethernet switches under the Spectrum product line. It has a broad portfolio of Top-of-Rack switches that range from 12 to 128 ports. Mellanox does not have Ethernet switch products for use cases that go beyond the Top-of-Rack like Spine, Core, or Chassis-based switches.

Mellanox switches focus on industry-leading performance, latency, and openness of the hardware. The company develops its own switching ASICs for use in its products and also sells ASICs as a merchant supplier. Its Spectrum ASIC offers 4.76bn packets-per-second of forwarding capacity delivering wire speed performance. The Spectrum-2 ASIC offers 9.52bn packets-per-second of forwarding capacity and also adds layer 3 IP routing functionality. Mellanox has maintained openness in its switching products and while it develops its own switching software called Mellanox Onyx, it designed its hardware to be compatible with any other standard Linux-based switch operating system. Specifically, Mellanox has been tested extensively with Cumulus-Linux, Switchdev, the Switch SDK, and Microsoft’s SONiC.
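The packets-per-second figures quoted above follow directly from wire-speed forwarding of minimum-size Ethernet frames. Assuming Spectrum and Spectrum-2 are 3.2 Tb/s and 6.4 Tb/s switching devices respectively (our assumption, not stated above), the arithmetic works out as in the sketch below: a minimum 64-byte frame occupies 84 bytes on the wire once preamble and inter-frame gap are included.

# Back-of-envelope check (our assumed 3.2 / 6.4 Tb/s switching capacities).
MIN_FRAME_BYTES = 64                    # minimum Ethernet frame
WIRE_OVERHEAD_BYTES = 20                # preamble + SFD (8 bytes) + inter-frame gap (12 bytes)
bits_per_min_frame = (MIN_FRAME_BYTES + WIRE_OVERHEAD_BYTES) * 8   # 672 bits on the wire

for name, capacity_bps in (("Spectrum", 3.2e12), ("Spectrum-2", 6.4e12)):
    pps = capacity_bps / bits_per_min_frame
    print(f"{name}: {pps / 1e9:.2f} billion packets/sec")   # ~4.76 and ~9.52, matching the figures above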

Ethernet NIC & Switch Integrated Circuits – Mellanox also sells its InfiniBand and Ethernet NIC and Switch ASICs as a merchant supplier. This allows customers to deploy its technology in any customized form factor they choose. The organization reports results for products it sells as a Merchant IC supplier in its IC Revenue segment. That area accounts for roughly 15-25% of revenue. ICs (both InfiniBand and Ethernet) accounted for $161.2 million in 2017 sales, or 19% of total revenue.

InfiniBand Products Overview
Mellanox also sells end-to-end InfiniBand products including Network Interface Cards (NICs), Switches, NIC & Switch ICs, Cables, and Transceivers. In Q2'18 InfiniBand products accounted for 38% of company sales (see Chart 40 above). That mix of sales is expected to continue to decline given the growth the company is seeing in Ethernet NICs. Mellanox's InfiniBand products are used in high-bandwidth, low-latency applications like High-Performance Computing and Storage. Because these use cases require passing vast amounts of data back and forth for calculation by CPUs and GPUs, throughput and latency of the interconnect are vital to overall system performance. For storage, particularly in Hyperscaler and Enterprise Data Centers, InfiniBand is primarily used for back-end Storage Interconnect. Recently, the use of InfiniBand for front-end storage connections has become more common; however, it's limited to "HPC-like" areas of Hyperscaler networks. InfiniBand switches can be installed near the top-of-rack and InfiniBand adapters can be installed in storage array appliances and/or servers. InfiniBand links can be used for storage-to-storage connections (back-end) or storage-to-server connections (front-end).

ConnectX – As stated above, Mellanox’s ConnectX line includes multi-protocol adapters that can support InfiniBand and Ethernet. InfiniBand NICs are commonly referred to as Host Channel Adapters (HCA). They’re considered the highest-performing Interconnect solution for Data Centers. Mellanox InfiniBand HCAs come in a variety of speeds and one- port and two-port configurations. Speeds include 20, 40, 56, 100, and 200 Gbps and the latest generation of the product achieves industry leading sub-600 nanosecond latency.

LinkX – As stated above, Mellanox's LinkX DACs, AOCs, and transceivers come in InfiniBand and Ethernet versions. Cable speeds include QDR (40G), FDR10 (40G), FDR (56G), EDR (100G), HDR100 (100G) and HDR (200G). InfiniBand LinkX products are designed with HPC applications in mind. Mellanox cables have an end-to-end Bit Error Rate of less than 1E-15, which is 1,000x better than the nearest competitor.
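To put the bit-error-rate claim in perspective, the sketch below (our illustrative arithmetic, not company data) converts BERs into expected bit errors per hour on a fully loaded 100 Gb/s link: roughly one error every three hours at 1E-15 versus several hundred per hour at 1E-12.

# Illustrative only: expected bit errors per hour on a fully utilized 100 Gb/s link.
LINK_RATE_BPS = 100e9
bits_per_hour = LINK_RATE_BPS * 3600

for label, ber in (("BER 1E-15 (Mellanox LinkX claim)", 1e-15), ("BER 1E-12 (1,000x worse)", 1e-12)):
    print(f"{label}: {bits_per_hour * ber:.2f} expected bit errors/hour")
# ~0.36 errors/hour at 1E-15 vs ~360 errors/hour at 1E-12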

SwitchIB – Mellanox has a full portfolio of Edge and Director InfiniBand switches supporting 40, 56, 100, and 200Gbps port speeds and the highest port density in the industry. Its InfiniBand switches come in port configurations of 8 to 800 ports that can scale out to thousands of nodes. The real value of Mellanox interconnect gear is its high bandwidth and low latency, but the switches also have a variety of other important features like real-time scalable network telemetry, InfiniBand Routing, InfiniBand-to-Ethernet gateway capabilities, granular QoS, and easy set up and management. The company's latest release is its "Quantum" branded InfiniBand HDR 200Gb/s switches which began shipping in late 2017. The Quantum line includes the QM8700, a 40-port non-blocking HDR InfiniBand Edge (Top-of-Rack) switch, and the CS8500, a 29RU chassis Director Switch enabling up to 800 HDR 200Gb/s ports or 1,600 HDR 100Gbps ports and 320Tb/s of aggregate switching capacity.

Product Differentiation = Speed, Low Latency, and Time-to-Market
We conducted a number of calls with customers, partners, and resellers of Mellanox products (both Ethernet and InfiniBand) to better understand the differentiation inherent in its products. Through our conversations, we understand that Mellanox has consistently delivered the highest speed links and lowest latencies available in the market. Moreover, it has been faster-to-market than its competitors.

CPU Offloads. Mellanox achieves low latency and high performance through its unique "offloading" architecture and technologies. To put it simply, offloading is taking a common and repetitive process – previously done via the CPU – and "offloading" it to the NIC. Offloads are often implemented in hardware, which makes them faster than if performed through software in the CPU. By offloading networking functions from the CPU and implementing them in hardware, Mellanox can improve CPU utilization and the speed of the overall system. This is key for Hyperscale customers. Below, we've outlined a few of the more popular offloads customers utilize in Mellanox's ConnectX-4, ConnectX-5, and ConnectX-6 NIC products. We note that these offload engines are not trivial to implement – many are hardware offloads designed into silicon on the NIC.

1. OVS Offload – Open vSwitch (OVS) allows Virtual Machines (VM) on a server to communicate with each other and the outside world. OVS traditionally resides in the hypervisor and switching is done through the CPU. OVS software is very CPU intensive, affecting system performance. Mellanox implements the OVS data-plane in NIC hardware while maintaining OVS control-plane unmodified (on the CPU).

2. SR-IOV Offload – Single Root I/O Virtualization (SR-IOV) is a more efficient method of allowing VMs to share hardware resources (i.e. the NIC). Software-based sharing adds unnecessary CPU overhead to each I/O operation due to the emulation layer between the guest driver and NIC. The CPU has to shuffle packets to, from, and between VMs. SR-IOV implements virtual network functions for each VM and sets up separate queues so the packets can come into the NIC, get sorted, and go directly to the appropriate VM without CPU intervention. Mellanox takes this a step further and implements it all in NIC hardware with QoS features and High Availability Redundancy.

3. Network Overlay Offload (VXLAN, NVGRE, and GENEVE) – To work around the 4,096 scaling limit of VLANs, Hyperscalers encapsulate Layer 2 Ethernet frames in Layer 3 IP/GRE packets or Layer 4 UDP datagrams using overlay protocols (i.e. VXLAN, NVGRE, and GENEVE). This allows virtualized environments to scale to 16 million virtual networks instead of 4,096 (a worked example of this scaling math follows this list). Once frames are encapsulated, traditional NICs can no longer perform network offloads on the packets. Mellanox adapters can parse and understand the network overlay protocols (VXLAN, NVGRE, and GENEVE) and thus are able to fully offload their network processing. This enables more bandwidth, greatly improved VM density, and better optimized CPU utilization on the host machine.

4. Memory Access Offload (RDMA) – Remote Direct Memory Access (RDMA) enables a device to access remote host memory directly without involvement of a host’s CPU. The benefit is lower CPU utilization for data intensive operations. Without RDMA, data has to traverse between Servers via both host CPUs and network software stacks. Mellanox adapters have RDMA hardware offload capabilities for InfiniBand and RDMA over Converged Ethernet (RoCE) hardware offload capabilities for Ethernet.

5. Storage Offloads (Encryption, iSCSI, iSER, erasure encoding, NVMe over Fabric) – Mellanox NICs have hardware offload capability for a variety of storage networking functions and protocols including iSCSI, iSCSI extensions for RDMA (iSER), and Non-Volatile Memory Express over Fabrics (NVMeOF). These offloads meaningfully improve storage latency, IOPS, bandwidth, and CPU utilization. Mellanox NICs also have hardware offloads for encryption and erasure encoding.

6. Security Offloads (Innova IPsec/SSL) – Mellanox makes a version of its Ethernet NICs with an on-board Kintex FPGA for IPSec SSL offload. The card is branded under the Mellanox Innova line of Smart NICs. The Innova IPsec adapter uses FPGA-based AES-GCM and AES-CBC cryptographic engines to efficiently offload IPsec compute intensive encryption and authentication tasks from the CPU. Lastly, we note that Mellanox NICs also support a much longer list of offloads – above and beyond the list we’ve noted here.
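As the worked example promised in the overlay-offload point (#3 above), the scaling numbers come straight from field widths: a VLAN ID is 12 bits while a VXLAN Network Identifier (VNI) is 24 bits. The sketch below simply computes both limits.

# Why overlays scale: 12-bit VLAN ID vs. 24-bit VXLAN Network Identifier (VNI).
VLAN_ID_BITS = 12
VXLAN_VNI_BITS = 24

print(f"VLANs:            {2 ** VLAN_ID_BITS:,}")      # 4,096
print(f"Virtual networks: {2 ** VXLAN_VNI_BITS:,}")    # 16,777,216 (~16 million)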

Time-to-Market Advantage. From our conversations with customers and resellers, we understand that Mellanox's success is also a result of its faster time-to-market with the highest-bandwidth products. For example, the company has had 40G Ethernet NICs available since 2010 – even before Switching infrastructure was available to support 40G. It has also been selling InfiniBand products at speeds above 40G since 2008. Mellanox was also first to market with 25G, 50G, and 100G Ethernet NICs, launching those solutions in early 2015. The company was actually a founding member of the 25G/50G Ethernet Consortium, along with Microsoft, Google, Arista, and Broadcom. Mellanox launched 200G in late 2017 and expects to see revenue associated with 200G deployments in 2H'18 – it's the only vendor with 200G-capable NICs available today.

Separately, we’ve been told that Mellanox has been very quick to develop customers’ feature requests into shipping products. One customer noted that it was able to develop a particular feature and deliver it in a shipping product within 6-8 weeks. The company gets very good comments about its responsiveness. Our conversations with customers suggest that the company is at least 6-18 months ahead of other market players in terms of speeds, latencies, and offloading features. Looking forward, we believe Mellanox will continue to out-execute its competitors on this front.

Virtual Protocol Interconnect (VPI). One key feature (and example of new feature development) that Mellanox offers for its ICs and adapter cards is Virtual Protocol Interconnect, or VPI. VPI allows the company's interconnect products to auto-detect whether they are connected to an Ethernet, InfiniBand, or Fibre Channel link. The VPI feature can also be used to dynamically re-configure the network to use any of these protocols. Mellanox has publicly discussed instances of customers experimenting with flipping between Ethernet and InfiniBand in their Data Centers. We believe this feature is a very compelling means to help customers become more comfortable with InfiniBand, which, based on current mind share, will have a difficult time displacing Ethernet for many use cases.

Leveraging Its InfiniBand Expertise into Ethernet. As discussed above, Mellanox increasingly is offering Ethernet-based solutions. The company has approached this market from multiple angles. These include low-cost Ethernet-only NICs and ICs, InfiniBand adaptors with the VPI feature (discussed above), as well as Switches that embrace SDN and open protocols such as OpenFlow and OpenStack. The company also offers products supporting RDMA-over-Converged-Ethernet (RoCE) as a means to leverage its RDMA experience and bring some InfiniBand-like features to Ethernet.

Overall, we like Mellanox’s strategy around Ethernet. It allows the company to directly address the persistent resistance by Enterprises around “Ether-Not” protocols (i.e. InfiniBand and Fibre Channel). Based on our conversations with industry contacts, many customers have deep-rooted resistance to adopt any products that aren’t Ethernet given a prevailing view that eventually, all networks will converge to Ethernet. At this point, our checks suggest RoCE adoption remains relatively negligible, but we’re encouraged by increasing activity around InfiniBand and RDMA from large vendors such as Microsoft, which has begun to support InfiniBand and RoCE on its Windows Server-based storage. More recently, it has set up InfiniBand networks and made RDMA available to customers provisioning their HPC instances. Microsoft also has been a Steering committee member of the IBTA (InfiniBand Trade Association) since November 2013.

Bluefield Smart-NICs – A Sizable Long-Term Opportunity
BlueField is Mellanox's new Smart-NIC product offering. The basic idea behind a Smart-NIC is to offload and process even more types of work from the host CPU. As such, the company's BlueField product improves upon the functions of its ConnectX-5 NIC product by adding a discrete processor. That processor – which is based on the company's February 2016 acquisition of EZchip – expands the NIC's offload capabilities from Layer 2-3 functions to also include some Layer 4-7 (application layer) functionality.

Looking back, we note that EZchip was a fabless semiconductor company providing Carrier Ethernet Network Processor Units (NPUs). Network Processors are akin to a CPU in a server and fall somewhere between an ASIC (application specific integrated circuit) and a multi-core CPU in terms of functionality; NPUs bring the high throughput of an ASIC but the programmability of a multi-core processor. EZchip's NPUs (NP1 through NP5) were primarily used in Edge Routers for carrier customers. Shortly after the acquisition, Mellanox started development of its BlueField Smart-NICs, which integrate EZchip's system-on-chip (SoC) technology with Mellanox's existing IP to add intelligent Layer 4-7 capabilities. In late 2017, Mellanox made the decision not to continue investing in the traditional EZchip NPU (it lost some design wins) and to focus only on the new BlueField product line. EZchip had layered in additional functionality from its prior acquisition of Tilera in 2014, which added multi-core capability. Hence, EZchip was well on its way to integrating the Layer 2-3 capabilities of its NPUs with the Layer 4-7 capabilities of Tilera's multicore network processors. Before getting acquired by Mellanox, EZchip had intended to have an integrated product by 2018 and beyond. This chip is the basis for Mellanox's current BlueField offering.

The new BlueField chip is ARM-based. It has 72 cores and each core can handle a flow. BlueField-based products have been available since Q4'17 and they're seeing good initial results. The company recently disclosed that it is seeing a "pretty healthy momentum of design wins for BlueField" all over the world. Further, it is currently working with Hyperscale customers for software development on the product – both in the U.S. and China. It is also working with Storage companies to use BlueField as the Storage controller solution.

Looking forward, management expects it to take a while for BlueField wins to really start to ramp – it is pointing to the back half of 2019 and 2020 for meaningful growth from the new products. With more networking functions getting moved to Data Center NICs over time, Mellanox expects BlueField to be a key enabler. The company sees the TAM for the product at $2 billion/year, $1 billion of which will be addressable by Mellanox.

Sales & Marketing
Mellanox runs a multi-pronged selling and marketing effort that includes direct sales, channel sales, and OEMs. Chart 42 highlights this strategy while Chart 43 outlines its Storage OEM relationships.

Chart 42: Mellanox Customers, Distribution & End Markets

Source: Mellanox Company Data

Chart 43: Mellanox Storage OEM Customers (2011, 2013, Today)
The roster of storage OEM customers has broadened over time and includes DataDirect Networks, EMC, IBM (XIV, TMS), HP/HP Enterprise, NetApp, Oracle, Fujitsu, LSI, Microsoft (SMB Direct), SGI, Western Digital, Toshiba, Teradata, Nimbus Data, Seagate, Xyratex, X-IO Technologies, Violin Memory, Infinidat, and PureStorage.
Source: Company Data


Customers
The company's 10% customer information – shown below – reflects its OEM relationships, with HPE and EMC contributing anywhere from 10-15% each quarter. Mellanox's customer list includes many of the large OEMs in Server and Storage, including HPE, Dell/EMC, Fujitsu, Data Direct Networks, NetApp, Oracle, Teradata, and Xyratex.

Chart 44: Mellanox 10% Customers, (Revenue in millions)
The quarterly table (Q1'14 – Q2'18) tracks 10% customers by percentage of revenue and by dollar contribution: IBM (a 10% customer through 2014, peaking at 15% of revenue / $31.8 million in Q4'14), HP/HPE (roughly 7-23% of revenue, $7-45 million per quarter), Dell/EMC (roughly 10-14% of revenue since Q2'16), and Polywell Computer.

Source: Company Data

Web 2.0 and Cloud Ramping Mellanox Ethernet NICs in Volume Today
Mellanox disclosed that it began to significantly penetrate the "Web 2.0" market in 2011, with "Cloud" companies adopting Mellanox 25G+ Ethernet NICs as well. Mellanox defines "Web 2.0" as a supercomputer or cluster of computers running a single application. Microsoft Bing, the Amazon.com website, and Facebook would all be examples. Conversely, it sees "Cloud" customers as using a supercomputer or cluster running multiple applications (Amazon AWS, Microsoft Azure, and hosting companies like Equinix, Rackspace, etc.). What these groups have in common is they spend a great deal on Data Center infrastructure.

We estimate that its Web 2.0 and Cloud customer segments each contribute approximately 15-20% of total company revenue. Based on company commentary, we believe that a significant portion of the Web 2.0 business is ICs. Of course, Web 2.0 companies often build their own infrastructure and customize it to run their very specific applications. Conversely, Cloud customers have to maintain an additional layer of flexibility and general purpose commonality given they need to run many, sometimes disparate applications for their end customers.

On the Cloud side, Mellanox has said that its solutions are widely adopted and it has some presence in approximately 90% of Cloud Data Center customers. Notably, it has an additional Ethernet design win at a large Public Cloud customer that it expects to generate revenue by the end of 2018. Mellanox has said that this customer group has historically been buying the company’s 10G Ethernet NICs but it has been getting new wins for its 25G and above products. There’s potential for growth from this group due to: 1) increased penetration; 2) upgrades to 25G/50G/100G NICs through 2018 and beyond; and 3) increased Mellanox Ethernet switch attach rate (at only 10-15% as of late 2017) to their Ethernet endpoints.


Manufacturing
Mellanox operates as a "fabless" semiconductor company. With third-party manufacturing, Mellanox can avoid the fixed costs of owning and running a fab and can manage capacity flexibly. It can also put more focus into designing and selling products. Mellanox uses Taiwan Semiconductor Manufacturing Company (TSMC) for its CMOS process ICs and STMicroelectronics for its BiCMOS process ICs. For assembly, packaging, and testing of ICs it uses Advanced Semiconductor Engineering (ASE) and Amkor Technology Korea (Amkor). It uses Flextronics International (Flex) and Universal Scientific Industrial Co. (USI) to manufacture adapter card products and switch systems. It has a number of other sub-contractors for cables. Mellanox provides its contract manufacturers with a 6-month rolling forecast of requirements and receives a monthly confirmation that third parties can fulfill the requirements. It receives quarterly reports with lead-times, yields, and pricing for all of its products to continually optimize cost.

Competition
We like the competitive environment around Mellanox's business. At this point, we don't see any particularly imposing competitive threats to its business.

On the InfiniBand side of the company (roughly 40% of sales), there's not a lot of real competition. As stated earlier, Mellanox holds an estimated 85-90% of the overall InfiniBand market. Intel's Omni-Path is a competitor; however, it has been unable to capture much share from InfiniBand. The product is technically inferior to Mellanox's InfiniBand. We note that Omni-Path does run the InfiniBand protocol stack, which is "skinnier" than the Ethernet stack and provides low latency. That said, it isn't removing processing from CPUs. Of course, Intel doesn't want to offload processing from Host CPUs in a way that would diminish the number of CPUs it can sell into Data Center networks. We understand that Intel will sometimes try to compete on price with Mellanox's InfiniBand. That's still a difficult proposition for Intel. Any customer that's looking holistically at total network costs should realize that Mellanox's InfiniBand is a much better solution – all else equal, it allows them to buy fewer Servers/CPUs to process an equivalent workload. We're quite certain that the trade-off of modestly higher InfiniBand costs (with Mellanox) versus buying more Intel CPUs (Servers) weighs in Mellanox's favor.

On the Ethernet NIC side of the business, Mellanox holds the dominant market share for 25G and above NICs that provide CPU offload capability. Naturally, Intel doesn't compete in this business at all (it does make traditional connectivity-only NICs which, of course, do not impinge on its ability to sell CPUs). Other competitors include Broadcom and Cavium (now Marvell). These suppliers hold relatively small market share – given the scale of their businesses, we presume they're not particularly focused on the market for Ethernet NICs supporting offload capabilities. Given their significantly lower Ethernet NIC revenue run rates, we presume they're devoting fewer R&D resources vis-à-vis Mellanox as well. Also, they can't leverage a core InfiniBand technology base the way that Mellanox can. Smaller players include Solarflare, Silicom, and Cisco. Chart 45 shows the competitive environment for the Ethernet NIC market overall. Chart 46 outlines current market share for 25G and above NICs providing CPU offload capability.


Chart 45: Ethernet NIC Market Share (All Types & Speeds)

[Chart: quarterly revenue share by vendor, Q1'16 – Q2'18, with a Q2'18 revenue share breakdown of Intel 32.3%, Mellanox 26.6%, Broadcom 13.3%, Cavium 10.4%, Solarflare 3.3%, Silicom 2.7%, Cisco 2.6%, and Other 8.8%.]

Source: IHS Markit, Jefferies Equity Research

Chart 46: Ethernet NIC Market Share (NICs that Support CPU Offload, 25G and Above Only)

[Chart: quarterly revenue share by vendor, Q1'16 – Q2'18, with a Q2'18 revenue share breakdown of Mellanox 63.3%, Broadcom 9.9%, Cavium 4.6%, Silicom 3.2%, Solarflare 2.7%, Cisco 2.5%, and Other 13.8%.]

Source: IHS Markit, Jefferies Equity Research

Omni-Path. Below, our analysis digs a bit deeper on Intel’s Omni-Path solution – the technology, at one point, was viewed as a significant potential competitor for Mellanox. We note that interconnect functions are repetitive and low complexity, so tying up high-cost CPU resources is inefficient. However, our checks suggest that modern server chips have many cores, and many software programs at lower-tier HPC organizations are not parallelized and Hyper-Threaded enough to take advantage of all of them. This leaves some CPU cores underutilized. Hence, using underutilized cores for interconnect processing isn’t particularly detrimental to the overall cost or performance of the system. Therefore, it’s not surprising that Intel can be marginally successful pitching these lower-tier HPC customers on the advantages of Omni-Path. That said, we expect that the most sophisticated, highest-performance HPC customers will remain squarely in the InfiniBand camp. These customers are likely to have higher average CPU utilization rates and appreciate the benefits of RDMA and Mellanox’s offload architecture.


For example, Mellanox InfiniBand powers 4 of the top 5 systems on the Top 500 list, while the highest-ranked Omni-Path system isn’t in the top 10. As shown in Chart 47 below, Omni-Path adoption appears to have leveled off at 38-39 of the Top 500 systems. Mellanox management has recently noted that it’s seeing Omni-Path customers migrate back to its InfiniBand solutions.

As an aside, we point out that InfiniBand overall (all speeds) lost share of total systems on the latest list. Our examination shows a decrease from 164 to 140 systems (roughly 33% to 28% of the Top 500). The InfiniBand share decline doesn’t provide the full picture for Mellanox, however. The company is providing interconnect for most, if not all, 25G+ Ethernet systems on the Top 500 list. Ethernet gained share, reflecting the commercial Hyperscalers (mostly Ethernet shops) getting included in the Top 500 list.

Chart 47: Top 500 Interconnect Trends

Source: Mellanox Company Data, Top500.org

Near-Term Business Trends Likely Positive; Intel’s Purley Still Ramping
Intel’s Purley platform, based on Skylake, is still ramping and is expected to be fully ramped over the next few quarters. Skylake was a “Tock” (a microarchitecture upgrade) in Intel’s Tick-Tock pattern of chip introductions – so it’s an important introduction. Further, our checks indicate that it’s scaling better than Haswell (a higher percentage of peak performance) by ~10%. Looking forward, we expect the next significant server upgrade cycle to be driven by Intel’s introduction of Ice Lake (a 10nm die shrink) and Cooper Lake (a microarchitecture upgrade), both of which the company has indicated are slated for 2020. We expect Purley to boost Mellanox’s prospects in Q3 and potentially in Q4.

Risks
Intel’s Product Cycles Can Add Volatility to the Model. Intel’s high share of the Data Center server CPU market can create risk for Mellanox if there are any supply chain or product cycle disruptions with new platforms. Mellanox has seen significant revenue growth in 14 of the last 15 years. However, there have been times when Intel’s Server product cycles added significant volatility to the model. For example, in 2012, Mellanox saw a massive business uptick attributed to pent-up demand for Intel’s new 10G-capable Romley platform and to customer inventory build. The situation led to artificially high 2012 revenue followed by a significant drop in 2013.


Its business trajectory could have been much smoother had Romley shipped on time and the inventory buildup not occurred. There have been no major Intel-related disruptions since the Romley launch; however, we think it’s an important concept for investors to keep in mind given that Intel has 90%+ of the high-performance server CPU market. We note that Intel’s move to PCIe 4.0 has the potential to be an important upgrade. Of course, Intel hasn’t released a new PCIe generation since Romley, and we expect that to take place in one of its next few product cycles.

Headline Risk with Competitor Announcements. We think there’s certainly headline risk associated with competitors announcing new products. Of course, Mellanox has >80% share for InfiniBand and >50% market share for 25G+ Ethernet NICs. Any new product announcements from competitors can certainly have an impact on sentiment around Mellanox’s ability to maintain its share.

For example, Intel has its next generation Omni-Path 2 scheduled to sample with customers in Q2’19 and reach general availability in Q4’19. Regardless of the strength of Intel’s strategy and Omni-Path’s long-term competitiveness, any announcements could raise investors’ fear gauge and weigh on sentiment. We note that while Intel was ramping the original Omni-Path in 2017, it weighed heavily on Mellanox shares (the forward PE multiple declined to below 10x).

We also think that investors don’t fully appreciate the strength of Mellanox’s competitive advantage versus other players in the market. As a result, any competitive announcements from Broadcom, Marvell/Cavium/QLogic, or Intel could be viewed as detrimental.

Open Compute Project Efforts in Ethernet NICs. We see it as a potential risk if the Open Compute Project (OCP) effort pushes the market toward less differentiation and open sharing of designs for Ethernet NICs. OCP is a consortium of Hyperscalers, Telecom Service Providers, Enterprises, and Vendors collectively trying to move the industry toward open sharing of hardware, software, and protocols for use in Data Center infrastructure. OCP has an ongoing NIC project it calls “Server Mezzanine.” To date, most of its work has focused on the board, hardware, form factor, and thermal design, which still leaves room for product differentiation by hardware vendors. The risk would be if the OCP group pushes further to reduce the proprietary content of OCP NIC designs. There is also an ongoing project in which members are working on a High-Performance Compute interconnect, but it’s at a much earlier stage. Mitigating the risk, we think the current OCP environment still allows Mellanox to differentiate its products with its advanced features and offload architecture. OCP-aligned customers value Mellanox’s technology enough to maintain this approach, in our view.

Management
Eyal Waldman, CEO – Eyal Waldman is a co-founder of Mellanox and has been the company’s CEO and a Director since March 1999. From March 1999 to June 2013, he also served as Chairman of the Board. Previously, he was a co-founder and VP, Engineering at Galileo Technology (1993 to 1999), which was subsequently acquired by Marvell in 2001. From 1989 to 1993, Mr. Waldman held a number of design- and architecture-related positions at Intel. He holds a BS and a Master’s in Electrical Engineering from the Technion – Israel Institute of Technology.

Michael Kagan, CTO – Michael Kagan is a co-founder of Mellanox and has served as CTO since January 2009. Previously, he served as VP, Architecture from 1999 to 2008. From 1983 to 1999, Mr. Kagan held a number of positions at Intel Corporation.


Between 1993 and 1996, he managed Pentium MMX design, and from 1996 to 1999 he managed the Architecture team of the Basic PC product group. Mr. Kagan holds a BS in Electrical Engineering from the Technion – Israel Institute of Technology.

Amir Prescher, SVP of Business Development – Amir Prescher has been Mellanox's SVP of Business Development since February 2013. Prior to joining Mellanox in 2011, he was a founder of Voltaire where he served in various capacities, including EVP Business Development from 2008 to 2011, VP of Business Development from 2001 to 2008, VP of Marketing from 1999 to 2008, and VP, R&D from 1997 to 1999. Prior to Voltaire, Prescher served as an officer in Israel’s Defense Forces Technical Intelligence Unit. He studied at Tel-Aviv University with a focus in Electronics Engineering.

Marc Sultzbaugh, SVP of Worldwide Sales – Marc Sultzbaugh has served as Mellanox’s SVP of Worldwide Sales since December 2012. Previously, he served as VP of Worldwide Sales from April 2007 to December 2012. He joined Mellanox in 2001 as Director of High-Performance Computing and Director of Central Area Sales and was promoted to Senior Director of Sales in October 2005. Prior to Mellanox, he held various Sales and Marketing positions with Brooktree Semiconductor, and from 1985 to 1989 he was an engineer at AT&T Microelectronics. He earned a BS in Electrical Engineering from The University of Missouri-Rolla and an MBA from The University of California, Irvine.

Financials
We Like the Activist Involvement – Keeps Them on Their Toes…
In November 2017, activist investor Starboard Value announced a 9.8% stake in Mellanox. Its position ultimately reached 10.7% in early 2018. We like the activist involvement in the stock as we think it’s a good motivator – it certainly increases management’s focus on shareholder value. The crux of Starboard’s position was that management had superior products, technology, and gross margin but underwhelming bottom-line profitability that underperformed peers. Starboard believed the company should cut its operating expense investments. Now that the Ethernet business has seen a strong ramp and still has significant momentum, there seems to be less of a “problem” for Starboard to solve. We think that was part of the impetus for the settlement Starboard and Mellanox reached in June 2018. Starboard ultimately agreed to halt its campaign while Mellanox agreed to appoint two new board members from Starboard’s proposed list as well as one new independent director. Further, if Mellanox fails to hit the following operating margin targets, Starboard gets one additional board seat: A) 23.5% for the trailing 12 months ending December 2018; B) 25.5% for the trailing 12 months ending June 2019; and C) 28% for the trailing 12 months ending December 2019. The offset to the positive nature of Starboard’s involvement would be the point at which it is satisfied with its return and begins to exit its position. Its exit could put pressure on the stock. Based on our review of ownership filings, Starboard had already pared its position slightly as of July-end (down to 8.65% of shares outstanding).
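As a reference point, the short sketch below applies the settlement’s trailing-12-month operating margin tests to the quarterly figures from our model (Appendix 1). These are our non-GAAP estimates, and the settlement’s precise margin definitions may differ, so treat this as an approximation rather than a definitive test.

```python
# Trailing-12-month operating margin vs. the settlement thresholds described
# above (23.5% / 25.5% / 28%), using quarterly revenue and non-GAAP operating
# income from our model in Appendix 1 ($ in millions). Estimates, not actuals.

quarters = {
    # quarter: (revenue $MM, operating income $MM)
    "Q1'18": (251.0, 52.1), "Q2'18": (268.5, 66.2),
    "Q3'18E": (276.7, 67.5), "Q4'18E": (282.2, 70.6),
    "Q1'19E": (285.4, 71.3), "Q2'19E": (304.1, 83.6),
    "Q3'19E": (325.7, 96.1), "Q4'19E": (348.2, 110.2),
}

targets = {  # trailing-12-month window ending..., threshold
    "Dec 2018": (["Q1'18", "Q2'18", "Q3'18E", "Q4'18E"], 0.235),
    "Jun 2019": (["Q3'18E", "Q4'18E", "Q1'19E", "Q2'19E"], 0.255),
    "Dec 2019": (["Q1'19E", "Q2'19E", "Q3'19E", "Q4'19E"], 0.280),
}

for period, (window, threshold) in targets.items():
    revenue = sum(quarters[q][0] for q in window)
    op_income = sum(quarters[q][1] for q in window)
    margin = op_income / revenue
    status = "clears" if margin >= threshold else "misses"
    print(f"TTM ending {period}: {margin:.1%} vs {threshold:.1%} target -> {status}")
```

On our numbers, all three windows clear their thresholds, though the December 2018 and June 2019 tests clear only narrowly.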

Conservative Expectation Setters – Not Much Volatility Around Earnings…
As shown in Appendix 4, Mellanox management are conservative expectations setters, and there’s not a lot of volatility around earnings. Most next-day trading moves have been within +/-3% following the company’s earnings announcements. Further, the company rarely misses expectations – it has missed Street expectations on the top or bottom line in only 2 of the last 16 quarters, while beating revenue expectations in 9 of the last 16 quarters and EPS expectations in 13 of the last 16. On guidance, it has raised about half the time and cut about half the time.


Appendix 1: Mellanox P&L Statement

Mellanox Technologies, Ltd. (Income Statement -- data in millions, except per share)

Mar Jun Sep Dec Mar Jun Sep Dec Mar Jun SepE DecE MarE JunE SepE DecE MarE JunE SepE DecE Q1'16 Q2'16 Q3'16 Q4'16 Q1'17 Q2'17 Q3'17 Q4'17 Q1'18 Q2'18 Q3'18 Q4'18 Q1'19 Q2'19 Q3'19 Q4'19 Q1'20 Q2'20 Q3'20 Q4'20 2016 2017 2018E 2019E 2020E Sales $196.8 $214.8 $224.2 $221.7 $188.7 $212.0 $225.7 $237.6 $251.0 $268.5 $276.7 $282.2 $285.4 $304.1 $325.7 $348.2 $349.7 $356.7 $363.0 $369.5 $857 $864 $1,078 $1,263 $1,439 Cost of sales 56.3 61.4 63.3 62.3 53.4 62.2 66.2 74.1 77.7 83.0 85.8 87.5 88.8 93.5 100.2 106.2 108.4 109.7 111.6 112.7 243.2 256.0 334.0 388.6 442.4 Research and Development 61.2 71.2 72.6 74.5 79.3 81.7 79.8 83.3 77.8 78.5 81.1 81.8 83.5 84.4 85.5 87.0 89.2 91.0 91.7 92.4 279.5 324.1 319.2 340.4 364.2 Sales and Marketing 26.5 26.3 28.3 29.5 30.1 31.9 31.2 32.5 33.5 29.9 31.3 31.3 30.7 31.2 31.8 32.2 33.2 35.7 36.3 36.9 110.7 125.7 126.0 125.8 142.1 General and Administrative 11.5 10.4 10.7 11.0 10.1 9.7 9.9 9.7 9.9 10.8 11.1 11.0 11.1 11.4 12.2 12.5 12.9 13.0 13.1 13.3 43.6 39.5 42.7 47.3 52.3 Operating Income 41.3 45.5 49.2 44.4 15.7 26.5 38.5 38.0 52.1 66.2 67.5 70.6 71.3 83.6 96.1 110.2 106.0 107.4 110.4 114.2 180.5 118.7 256.4 361.3 437.9

Interest Income and Other, Net 0.1 0.3 0.6 0.1 0.7 0.8 1.0 0.6 0.6 0.5 0.6 0.6 0.6 0.6 0.7 0.7 0.7 0.8 0.8 0.9 1.1 3.1 2.3 2.7 3.2 Interest Expense -1.0 -2.2 -2.2 -1.9 -2.0 -2.0 -2.0 -1.9 -1.2 -0.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -7.4 -7.9 -2.0 0.0 0.0 Pretax Income 40.4 43.6 47.7 42.5 14.4 25.3 37.4 36.7 51.6 65.9 68.1 71.1 72.0 84.3 96.8 110.9 106.7 108.2 111.2 115.0 174.2 113.8 256.7 363.9 441.1

Taxes 1.1 0.9 1.5 1.2 -0.3 2.9 0.8 -6.2 0.2 -0.7 2.4 2.5 2.5 3.0 3.4 3.9 3.7 3.8 3.9 4.0 4.7 -2.7 4.4 12.7 15.4 Pro forma Net Income 39.3 42.7 46.2 41.3 14.7 22.4 36.6 42.9 51.4 66.6 65.7 68.7 69.4 81.3 93.4 107.0 103.0 104.4 107.3 111.0 169.5 116.6 252.3 351.2 425.7

Diluted EPS, Ex-Stock Comp. $0.81 $0.87 $0.93 $0.82 $0.29 $0.44 $0.71 $0.82 $0.98 $1.25 $1.23 $1.27 $1.28 $1.49 $1.69 $1.93 $1.84 $1.85 $1.88 $1.93 $3.43 $2.27 $4.73 $6.40 $7.50 Diluted EPS, Incl. Stock Comp. $0.43 $0.50 $0.58 $0.48 (0.00) $0.09 $0.35 $0.48 $0.69 $0.97 $0.87 $0.91 $0.91 $1.12 $1.32 $1.55 $1.46 $1.46 $1.49 $1.53 $1.99 $0.93 $3.45 $4.91 $5.95

Stock Comp expense 18.3 18.1 17.6 17.2 14.8 17.7 18.6 17.9 15.0 14.9 19.0 19.4 19.8 20.2 20.6 21.0 21.4 21.8 22.3 22.7 71.1 68.9 68.3 81.6 88.3

Diluted Sharecount 48.8 49.3 49.7 50.1 50.5 51.1 51.6 52.1 52.6 53.2 53.6 54.0 54.3 54.7 55.1 55.5 55.9 56.5 57.0 57.6 49.5 51.3 53.4 54.9 56.7

EBITDA 50.3 55.2 60.3 55.8 27.9 39.1 51.2 52.2 65.3 79.1 80.5 83.7 84.6 97.0 109.6 123.9 119.9 121.6 124.9 129.0 221.6 170.3 308.6 415.1 495.3

MARGIN ANALYSIS Gross Margin 71.4% 71.4% 71.8% 71.9% 71.7% 70.6% 70.7% 68.8% 69.0% 69.1% 69.0% 69.0% 68.9% 69.3% 69.3% 69.5% 69.0% 69.3% 69.3% 69.5% 71.6% 70.4% 69.0% 69.2% 69.3% Research & Development 31.1% 33.1% 32.4% 33.6% 42.0% 38.5% 35.4% 35.0% 31.0% 29.3% 29.3% 29.0% 29.3% 27.8% 26.3% 25.0% 25.5% 25.5% 25.3% 25.0% 32.6% 37.5% 29.6% 26.9% 25.3% Sales & Marketing 13.5% 12.2% 12.6% 13.3% 16.0% 15.0% 13.8% 13.7% 13.3% 11.2% 11.3% 11.1% 10.8% 10.3% 9.8% 9.3% 9.5% 10.0% 10.0% 10.0% 12.9% 14.6% 11.7% 10.0% 9.9% General & Administrative 5.8% 4.8% 4.8% 5.0% 5.4% 4.6% 4.4% 4.1% 3.9% 4.0% 4.0% 3.9% 3.9% 3.8% 3.8% 3.6% 3.7% 3.7% 3.6% 3.6% 5.1% 4.6% 4.0% 3.7% 3.6% Operating income 21.0% 21.2% 22.0% 20.0% 8.3% 12.5% 17.1% 16.0% 20.8% 24.7% 24.4% 25.0% 25.0% 27.5% 29.5% 31.7% 30.3% 30.1% 30.4% 30.9% 21.0% 13.7% 23.8% 28.6% 30.4% Tax Rate 2.7% 2.1% 3.1% 2.9% -2.0% 11.5% 2.2% -16.8% 0.4% -1.0% 3.5% 3.5% 3.5% 3.5% 3.5% 3.5% 3.5% 3.5% 3.5% 3.5% 2.7% -2.4% 1.7% 3.5% 3.5%

Y/Y Sales 34.2% 31.7% 30.8% 25.3% -4.1% -1.3% 0.7% 7.2% 33.0% 26.7% 22.6% 18.8% 13.7% 13.3% 17.7% 23.4% 22.5% 17.3% 11.5% 6.1% 30.3% 0.7% 24.8% 17.2% 13.9% Operating Income 137.0% 125.8% 133.3% 121.2% 38.0% 58.1% 78.2% 85.7% 332.2% 250.4% 175.4% 185.6% 136.9% 126.3% 142.3% 156.2% 148.5% 128.4% 114.9% 103.6% 29.0% -34.2% 116.1% 40.9% 21.2% Net income 37.2% 18.4% 27.3% 10.2% -62.7% -47.6% -20.7% 3.9% 250.5% 197.6% 79.4% 60.0% 35.2% 22.2% 42.1% 55.9% 48.3% 28.3% 14.9% 3.7% 22.4% -31.2% 116.5% 39.2% 21.2% EPS 33.5% 14.9% 23.7% 6.6% -63.9% -49.5% -23.8% 0.1% 235.9% 185.9% 72.9% 54.4% 30.9% 18.8% 38.2% 51.6% 44.1% 24.4% 11.0% -0.1% 18.8% -33.7% 108.2% 35.2% 17.3%

Q/Q Sales 11.2% 9.1% 4.4% -1.1% -14.9% 12.4% 6.5% 5.3% 5.6% 7.0% 3.1% 2.0% 1.1% 6.6% 7.1% 6.9% 0.4% 2.0% 1.8% 1.8%

BALANCE SHEET / OTHER DATA Cash and Investments $261.8 $276.5 $292.4 $328.4 $325.2 $310.3 $346.2 $281.8 $294.3 $290.6 Days Sales Outstanding 44 47 51 56 64 59 57 55 54 51 Days Inventory 109 102 91 93 120 108 92 78 79 90 Deferred Revenue $31.1 $36.0 $38.9 $40.3 $38.8 $38.3 $40.4 $41.3 $38.8 $38.5

Source: Company Data, Jefferies Research


Appendix 2: Mellanox Balance Sheet

Mellanox Technologies, Ltd. Balance Sheet (data in millions, except per share)

Mar Jun Sep Dec Mar Jun Sep Dec Mar Jun SepE DecE MarE JunE SepE DecE MarE JunE SepE DecE ASSETS Q1'16 Q2'16 Q3'16 Q4'16 Q1'17 Q2'17 Q3'17 Q4'17 Q1'18 Q2'18 Q3'18 Q4'18 Q1'19 Q2'19 Q3'19 Q4'19 Q1'20 Q2'20 Q3'20 Q4'20 Cash and Equivalents 117.9 63.5 55.5 56.8 58.4 55.7 58.4 70.5 98.6 71.4 135.1 204.7 271.6 339.8 418.2 508.8 610.1 709.5 808.8 911.3 Short-Term Investments in Marketable Securities 143.9 213.0 236.9 271.7 266.8 254.5 287.7 211.3 195.7 219.2 219.2 219.2 219.2 219.2 219.2 219.2 219.2 219.2 219.2 219.2 Restricted Cash 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Accounts Receivable, net 105.1 117.3 133.1 139.9 126.2 149.5 133.6 154.2 142.9 155.7 169.1 172.5 174.4 185.9 199.0 212.8 213.7 218.0 221.9 225.8 Inventories 72.3 64.7 61.9 65.5 75.3 72.0 61.6 64.7 69.9 94.5 85.8 82.6 83.8 88.3 94.6 100.3 102.4 103.6 105.4 106.4 Deferred Income Taxes and Other Current Assets 22.9 20.5 20.0 17.3 23.0 20.7 20.2 14.3 15.6 12.0 12.0 12.0 12.0 12.0 12.0 12.0 12.0 12.0 12.0 12.0 Total Current Assets 462.1 478.9 507.5 551.2 549.8 552.5 561.6 514.9 522.6 552.7 621.2 691.0 761.1 845.3 943.0 1,053.1 1,157.4 1,262.4 1,367.3 1,474.7

Property and Equipment, net 103.2 112.3 113.6 118.6 123.9 121.2 117.5 109.9 110.1 106.7 107.0 107.8 109.1 111.0 113.6 116.8 120.6 125.0 130.1 135.8 Severance Assets 15.8 15.8 16.4 15.9 16.9 17.8 17.8 18.3 18.0 17.1 17.1 17.1 17.1 17.1 17.1 17.1 17.1 17.1 17.1 17.1 Intangible Assets, net 308.7 292.8 288.7 278.0 267.4 253.4 247.4 228.2 218.7 208.5 206.4 204.3 202.3 200.2 198.2 196.3 194.3 192.3 190.4 188.5 Goodw ill 476.0 476.0 476.0 471.2 471.2 471.2 472.4 472.4 472.4 473.9 473.9 473.9 473.9 473.9 473.9 473.9 473.9 473.9 473.9 473.9 Deferred Taxes and Other Assets 32.6 31.8 32.8 36.7 49.9 50.5 53.9 58.1 86.4 90.8 83.0 84.7 85.6 91.2 97.7 104.5 104.9 107.0 108.9 110.8 Total Assets 1,398.5 1,407.6 1,435.0 1,471.7 1,479.1 1,466.7 1,470.7 1,401.9 1,428.4 1,449.8 1,508.6 1,578.7 1,649.1 1,738.8 1,843.6 1,961.6 2,068.2 2,177.8 2,287.7 2,400.9

LIABILITIES Accounts Payable 47.6 55.2 50.5 57.7 63.5 58.8 42.1 59.1 64.5 71.1 66.7 68.0 69.0 72.7 77.9 82.6 84.3 85.3 86.8 87.6 Accrued Liabilities 106.7 87.6 90.4 105.0 107.1 94.4 91.5 114.1 107.7 124.8 120.3 121.1 122.2 123.8 126.2 128.5 131.9 136.2 137.5 139.1 Deferred Revenues 18.9 21.8 23.4 24.4 23.4 23.0 23.3 23.5 20.2 20.7 22.1 22.6 22.8 24.3 26.1 27.9 28.0 28.5 29.0 29.6 Capital Lease Obligations 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Current Portion of Debt 25.9 29.5 20.1 23.6 14.2 21.8 23.3 0.0 34.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Total Current Liabilities 199.4 194.1 184.4 210.7 208.2 198.0 180.2 196.6 226.7 216.6 209.2 211.7 214.0 220.9 230.2 238.9 244.2 250.0 253.4 256.3

Accrued Severance 19.6 20.2 20.6 19.9 21.0 23.0 22.8 23.2 22.8 21.5 21.5 21.5 21.5 21.5 21.5 21.5 21.5 21.5 21.5 21.5 Deferred Revenues 12.2 14.2 15.5 16.0 15.5 15.2 17.1 17.8 18.6 17.8 19.4 19.8 20.0 21.3 22.8 24.4 24.5 25.0 25.4 25.9 Capital Lease Obligations 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Long Term Debt 248.9 238.9 228.9 218.8 208.7 191.6 174.4 72.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Other Long Term Liabilities 26.5 26.2 29.4 30.6 31.5 33.7 39.1 34.1 33.1 32.1 33.2 33.9 34.2 36.5 39.1 41.8 42.0 42.8 43.6 44.3 Total Long-Term liabilities 307.3 299.5 294.3 285.2 276.6 263.6 253.5 147.9 74.5 71.4 74.0 75.1 75.7 79.3 83.3 87.6 87.9 89.2 90.4 91.7

Shareholders Equity 892 914 956 976 994 1,005 1,037 1,057 1,127 1,162 1,225 1,292 1,359 1,439 1,530 1,635 1,736 1,839 1,944 2,053 Total Liabilities and Shareholders Equity 1,398.5 1,407.6 1,435.0 1,471.7 1,479.1 1,466.7 1,470.7 1,401.9 1,428.4 1,449.8 1,508.6 1,578.7 1,649.1 1,738.8 1,843.6 1,961.6 2,068.2 2,177.8 2,287.7 2,400.9

LIQUIDITY Current ratio 2.3 2.5 2.8 2.6 2.6 2.8 3.1 2.6 2.3 2.6 3.0 3.3 3.6 3.8 4.1 4.4 4.7 5.0 5.4 5.8 Quick ratio 2.0 2.1 2.4 2.3 2.3 2.4 2.8 2.3 2.0 2.1 2.6 2.9 3.2 3.4 3.7 4.0 4.3 4.6 5.0 5.3

SOLVENCY Debt to capital 2.9% 2.8% 3.0% 3.0% 3.1% 3.2% 3.6% 3.1% 2.9% 2.7% 2.6% 2.6% 2.5% 2.5% 2.5% 2.5% 2.4% 2.3% 2.2% 2.1% Debt to assets 1.9% 1.9% 2.0% 2.1% 2.1% 2.3% 2.7% 2.4% 2.3% 2.2% 2.2% 2.1% 2.1% 2.1% 2.1% 2.1% 2.0% 2.0% 1.9% 1.8% Return on equity 13.0% 13.1% 13.7% 13.4% 10.1% 8.7% 7.9% 8.1% 10.5% 13.5% 15.2% 16.9% 17.6% 17.9% 18.7% 19.8% 20.7% 20.8% 20.4% 19.5%

ASSET UTILIZATION Days sales outstanding 44 47 51 56 64 59 57 55 54 51 55 55 55 55 55 55 55 55 55 55 Days inventory 109 102 91 93 120 108 92 78 79 90 90 85 85 85 85 85 85 85 85 85 Days payables 75 76 76 79 103 89 69 62 72 74 70 70 70 70 70 70 70 70 70 70 Sales to current assets 0.35 0.46 0.45 0.42 0.34 0.38 0.41 0.44 0.48 0.50 Sales to w orking capital 0.93 1.17 0.92 0.80 0.67 0.73 0.73 0.83 1.13 1.16 Assets to equity 1.57 1.54 1.50 1.51 1.49 1.46 1.42 1.33 1.27 1.25 Sales to total assets 0.16 0.15 0.16 0.15 0.13 0.14 0.15 0.17 0.18 0.19 Book value per share $18.27 $18.55 $19.26 $19.46 $19.71 $19.66 $20.08 $20.31 $21.41 $21.83 Tangible Book value per share $2.19 $2.95 $3.86 $4.52 $5.07 $5.48 $6.14 $6.85 $8.28 $9.01 Cash per share $5.36 $5.61 $5.89 $6.55 $6.45 $6.07 $6.70 $5.41 $5.59 $5.46

Source: Company Data, Jefferies Research


Appendix 3: Mellanox Cash Flow Statement

Mellanox Technologies, Ltd. (Cash Flow Model - data in millions)

Mar Jun Sep Dec Mar Jun SepE DecE MarE JunE SepE DecE MarE JunE SepE DecE OPERATING ACTIVITIES Q1'17 Q2'17 Q3'17 Q4'17 Q1'18 Q2'18 Q3'18 Q4'18 Q1'19 Q2'19 Q3'19 Q4'19 Q1'20 Q2'20 Q3'20 Q4'20 2016 2017 2018E 2019E 2020E Net Income -12.2 -8.0 3.4 -2.6 37.8 16.5 65.7 68.7 69.4 81.3 93.4 107.0 103.0 104.4 107.3 111.0 18.5 -19.4 188.7 351.2 425.7 Adjustments: Depreciation and Amortization 25.2 25.6 25.8 27.3 26.4 26.2 13.0 13.1 13.3 13.4 13.5 13.7 13.9 14.2 14.5 14.8 97.7 103.8 78.8 53.8 57.4 Deferred Income Taxes -0.9 0.2 0.0 -1.4 -26.8 -1.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 -2.2 -28.1 0.0 0.0 Share-based Compensation 14.8 17.7 18.6 17.9 15.0 14.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 66.3 68.9 29.9 0.0 0.0 Impairment of / Gain on Sale on Investments -0.9 -0.8 -0.9 -0.8 -0.9 -0.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -1.8 -3.5 -1.8 0.0 0.0 Excess Tax Benefits from Stock-Based Compensation 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Other 0.0 0.0 0.0 12.0 0.1 1.4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 12.0 1.6 0.0 0.0 Changes in Operating Assets and Liabilities: Accounts Receivable, Net 15.5 -23.3 16.2 -20.6 11.3 -12.8 -13.4 -3.4 -1.9 -11.5 -13.2 -13.7 -0.9 -4.3 -3.8 -3.9 -39.5 -12.2 -18.3 -40.3 -13.0 Inventories -10.5 2.8 10.0 -3.2 -5.7 -24.9 8.7 3.2 -1.2 -4.5 -6.3 -5.7 -2.1 -1.2 -1.8 -1.0 8.3 -0.9 -18.7 -17.7 -6.1 Prepaid Expenses and Other Current Assets -3.7 1.0 -3.1 5.1 -1.3 2.1 7.8 -1.7 -1.0 -5.6 -6.5 -6.7 -0.5 -2.1 -1.9 -1.9 6.9 -0.7 6.9 -19.8 -6.4 Accounts Payable 4.9 -4.9 -14.9 15.0 3.9 8.6 -4.4 1.3 1.0 3.7 5.2 4.7 1.7 1.0 1.5 0.8 11.5 0.2 9.4 14.5 5.1 Accrued Liabilities and Other Payables 2.8 -3.9 -2.0 18.3 -4.5 16.8 -0.4 2.2 1.9 6.7 8.3 8.3 3.9 6.1 3.0 3.3 27.3 15.2 14.2 25.2 16.3 Net Cash Provided by Operating Activities 35.0 6.4 53.0 66.9 55.4 46.7 77.0 83.5 81.5 83.6 94.4 107.5 119.0 118.1 118.8 123.0 196.1 161.3 262.6 367.1 478.9

INVESTING ACTIVITIES Cash Paid for Acquisitions, Net of Cash Acquired 0.0 0.0 -0.9 0.0 0.0 -7.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -693.7 -0.9 -7.1 0.0 0.0 Purchases of Severance-Related Insurance Policies -0.3 -0.3 -0.3 -0.3 -0.3 -0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -1.2 -1.3 -0.6 0.0 0.0 Purchases of Short-Term Investments -50.3 -18.8 -70.8 -48.9 -20.9 -61.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -300.9 -188.7 -82.5 0.0 0.0 Proceeds from Sale of Short Term Investments 54.2 20.1 21.1 97.7 8.9 5.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 237.8 193.1 13.9 0.0 0.0 Proceeds from Maturities of Short Term Investments 1.8 11.8 17.4 28.1 28.1 34.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 149.7 59.1 62.4 0.0 0.0 Increase in Restricted Cash Deposits 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Proceeds from Sale of Property and Equipment 0.0 0.0 0.0 0.0 0.0 3.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.2 0.0 0.0 Purchases of Property and Equipment -15.9 -11.2 -8.1 -6.1 -7.2 -12.9 -13.2 -13.9 -14.6 -15.3 -16.1 -16.9 -17.7 -18.6 -19.6 -20.5 -43.0 -41.4 -47.2 -62.9 -76.5 Purchases of Intangibles -1.1 -0.5 -0.2 -1.0 -6.3 -0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -8.0 -2.8 -6.4 0.0 0.0 Purchases of Equity Investment in Private Company -11.0 0.0 -2.5 -1.5 -2.5 -3.5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -5.0 -15.0 -6.0 0.0 0.0 Net Cash Used in Investing Activities -22.6 1.0 -44.3 67.9 -0.2 -43.0 -13.2 -13.9 -14.6 -15.3 -16.1 -16.9 -17.7 -18.6 -19.6 -20.5 -664.2 2.0 -70.3 -62.9 -76.5

FINANCING ACTIVITIES Proceeds from Term Debt, Net of Cost 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 274.5 0.0 0.0 0.0 0.0 Repayment of Debt -20.0 -10.0 -16.0 -126.0 -39.0 -35.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -34.0 -172.0 -74.0 0.0 0.0 Proceeds from Public Offering 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Principal Payments on Capital Lease Obligations -2.5 -0.7 -2.7 -1.4 -2.2 -1.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -1.4 -7.4 -3.4 0.0 0.0 Proceeds from Exercise of Share Aw ards 11.7 0.7 12.7 4.6 14.1 5.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 22.6 29.7 19.3 0.0 0.0 Excess Tax Benefits from Stock-Based Compensation 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Net Cash Provided by Financing Activities -10.8 -10.0 -6.0 -122.8 -27.1 -31.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 261.6 -149.6 -58.1 0.0 0.0

Net Increase in Cash and Cash Equivalents 1.6 -2.6 2.7 12.1 28.1 -27.2 63.7 69.6 67.0 68.2 78.3 90.6 101.3 99.5 99.2 102.5 -206.4 13.7 134.2 304.2 402.5

Cash and Cash Equivalents at Beginning of Period 56.8 58.4 55.7 58.4 70.5 98.6 71.4 135.1 204.7 271.6 339.8 418.2 508.8 610.1 709.6 808.8 263.2 56.8 70.5 204.7 508.8 Cash and Cash Equivalents at End of Period 58.4 55.7 58.4 70.5 98.6 71.4 135.1 204.7 271.6 339.8 418.2 508.8 610.1 709.6 808.8 911.3 56.8 70.5 204.7 508.8 911.3

Free Cash Flow 19.1 -4.8 44.9 60.8 48.2 33.9 63.7 69.6 67.0 68.2 78.3 90.6 101.3 99.5 99.2 102.5 153.1 119.9 215.3 304.2 402.5

Source: Company Data, Jefferies Research


Appendix 4: Mellanox Performance vs. Expectations

Mellanox Technologies, Ltd. (Quarterly Revenue/EPS Performance vs. Guidance/Consensus)

Sep Dec Mar Jun Sep Dec Mar Jun Sep Dec Mar Jun Sep Dec Mar Jun Sep Q3'14 Q4'14 Q1'15 Q2'15 Q3'15 Q4'15 Q1'16 Q2'16 Q3'16 Q4'16 Q1'17 Q2'17 Q3'17 Q4'17 Q1'18 Q2'18 Q3'18 Average Summary Guidance Revs (millions of $) 114-118 132-138 140-145 155-160 165-170 171-176 190-195 210-215 221-227 222-228 200-210 205-215 222-232 230-240 222-232 255-265 270-280 Midpoint 116.0 135.0 147.5 157.5 167.5 173.5 192.5 212.5 224.0 225.0 205.0 210.0 227.0 235.0 227.0 260.0 275.0

Consensus Revs (millions of $) 116.5 135.2 142.7 157.8 168.8 174.0 192.5 212.7 224.2 225.0 204.7 209.6 226.7 235.4 244.9 268.4 275.4 Consensus EPS (Non-GAAP) $0.26 $0.44 $0.48 $0.62 $0.71 $0.70 $0.75 $0.83 $0.92 $0.86 $0.49 $0.44 $0.64 $0.68 $0.85 $1.09 $1.19

Printed Revs 120.7 141.1 146.7 163.1 171.4 176.9 196.8 214.8 224.2 221.7 188.7 212.0 225.7 237.6 251.0 268.5 Printed EPS $0.38 $0.59 $0.60 $0.75 $0.75 $0.77 $0.81 $0.87 $0.93 $0.82 $0.29 $0.44 $0.71 $0.82 $0.98 $1.25 Beat In-Line Miss Printed Revs +/- Street 4% 4% 3% 3% 2% 2% 2% 1% 0% -1% -8% 1% 0% 1% 2% 0% 1% 9 of 16 5 of 16 2 of 16 Printed EPS +/- Street $0.12 $0.15 $0.12 $0.13 $0.04 $0.07 $0.06 $0.04 $0.01 ($0.04) ($0.20) ($0.00) $0.07 $0.14 $0.13 $0.16 13 of 16 1 of 16 2 of 16 Quarter Characterization beat beat beat beat beat beat beat beat in-line miss miss beat beat beat beat beat 13 of 16 1 of 16 2 of 16

Printed Revs +/- Guidance 4% 5% -1% 4% 2% 2% 2% 1% 0% -1% -8% 1% -1% 1% 11% 3% 2%

GUIDANCE Raise In-Line Cut Subsequent Quarter Revenue Guidance 135.0 147.5 157.5 167.5 173.5 192.5 212.5 224.0 225.0 205.0 210.0 227.0 235.0 227.0 260.0 275.0 Subsequent Quarter Revenue Consensus 129.5 135.0 149.2 163.4 172.2 171.7 216.8 226.9 233.8 225.7 222.4 233.6 239.3 220.1 249.6 270 Subsequent Quarter Revenue Guidance vs. Consensus Differential4% 9% 6% 3% 1% 12% -2% -1% -4% -9% -6% -3% -2% 3% 4% 2% 1% 8 of 16 1 of 16 7 of 16

Stock Up Stock Down EPS Date 10/22/2014 1/28/2015 4/21/2015 7/22/2015 10/21/2015 1/27/2016 4/20/2016 7/20/2016 10/27/2016 2/1/2017 4/26/2017 7/26/2017 10/25/2017 1/18/2018 4/17/2018 7/17/2018 Next Trading Day Post EPS Stock Performance (1 day) -1.3% 2.5% 2.1% 2.1% 5.8% 12.0% -12.1% -9.3% 1.4% -6.1% -8.5% 2.2% -0.1% 1.0% -0.7% -1.3% -1% 8 of 16 8 of 16

Beat & Miss & Cut Raise Quarters Quarters Characterization of the Quarter/Guidance beat/raise beat/raise beat/raise beat/raise beat/raise beat/raise beat/cut beat/cut in-line/cut miss/cut miss/cut beat/cut beat/cut beat/raise beat/raise beat/raise 9 of 16 2 of 16

Source: Company Data, Jefferies Research


Company Description

Mellanox Technologies
Mellanox, based in Yokneam, Israel, and Sunnyvale, CA, is a leading supplier of high-performance Interconnect solutions for Data Center networks. Their products – sold as Circuit Boards, Integrated Circuits (ICs), Cables, and Switches – are used to connect Servers, Storage, and Networking devices together in Data Centers. Historically, Mellanox has been leveraged to the High Performance Computing (HPC) / supercomputer market, primarily with their InfiniBand products. The company is the market-share leader in that space with ~85% share of InfiniBand Host Channel Adapters worldwide. With a new product initiative that began in 2013, Mellanox has emerged as a leader in Ethernet products as well; they are now the number 1 provider of Ethernet Network Interface Cards at 25G speeds and higher. From a customer perspective, the HPC space now contributes less than 40% of sales, with the balance coming from Storage, Web 2.0/Cloud companies, and traditional enterprises. Mellanox acquired privately-held Kotura and IPtronics in 2013 and public company EZchip in 2016, acquisitions that provide Mellanox with System-on-a-Chip (SoC) and Silicon Photonics (SiP) technology. The company generated 2017 revenue of $864 million and employed 2,448 people at year-end 2017.

Company Valuation/Risks

Mellanox Technologies
The business currently trades for 9.8x our 2020 non-GAAP EPS projection (10.8x Street consensus). Our still-conservative $110 PT works out to 14.7x our 2020 EPS estimate, or parity with the company’s historical forward PE multiple. We expect that investors’ willingness to pay higher multiples will improve as the business becomes more predictable, Intel Omni-Path concerns fade, and the company drives significant cash flow. Primary risks include: 1) delays in Intel’s Server CPU product cycles; 2) headline risk from competitive product announcements; and 3) volatility in ICP capex spending.
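The multiples cited above follow directly from the estimates in this report (a $73.45 share price, our $7.50 2020 non-GAAP EPS estimate, the Street’s $6.82, and our $110 price target); the short sketch below reproduces the arithmetic.

```python
# Quick check of the valuation math cited above (all inputs from this report).
price = 73.45            # recent share price ($)
eps_2020e = 7.50         # our 2020 non-GAAP EPS estimate ($)
eps_2020_street = 6.82   # Street consensus 2020 EPS ($)
price_target = 110.00    # our price target ($)

print(f"P/E on our 2020E EPS:    {price / eps_2020e:.1f}x")         # ~9.8x
print(f"P/E on Street 2020E EPS: {price / eps_2020_street:.1f}x")   # ~10.8x
print(f"PT implied multiple:     {price_target / eps_2020e:.1f}x")  # ~14.7x
print(f"Implied upside to PT:    {(price_target / price - 1):.0%}") # ~50%
```

The roughly 50% implied upside to our price target sits comfortably above the 15% total-return threshold that defines a Buy rating.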

For Important Disclosure information on companies recommended in this report, please visit our website at https://javatar.bluematrix.com/sellside/Disclosures.action or call 212.284.2300.

Analyst Certification: I, George C. Notter, certify that all of the views expressed in this research report accurately reflect my personal views about the subject security(ies) and subject company(ies). I also certify that no part of my compensation was, is, or will be, directly or indirectly, related to the specific recommendations or views expressed in this research report. I, Kyle McNealy, certify that all of the views expressed in this research report accurately reflect my personal views about the subject security(ies) and subject company(ies). I also certify that no part of my compensation was, is, or will be, directly or indirectly, related to the specific recommendations or views expressed in this research report. I, Steven Sarver, certify that all of the views expressed in this research report accurately reflect my personal views about the subject security(ies) and subject company(ies). I also certify that no part of my compensation was, is, or will be, directly or indirectly, related to the specific recommendations or views expressed in this research report. As is the case with all Jefferies employees, the analyst(s) responsible for the coverage of the financial instruments discussed in this report receives compensation based in part on the overall performance of the firm, including investment banking income. We seek to update our research as appropriate, but various regulations may prevent us from doing so. Aside from certain industry reports published on a periodic basis, the large majority of reports are published at irregular intervals as appropriate in the analyst's judgement.

Investment Recommendation Record (Article 3(1)e and Article 7 of MAR): Recommendation Published October 2, 2018, 01:18 ET. Recommendation Distributed October 2, 2018, 01:25 ET.

Company Specific Disclosures
Jefferies Group LLC makes a market in the securities or ADRs of Mellanox Technologies, Ltd.
Jefferies Group LLC makes a market in the securities or ADRs of Broadcom.
Jefferies Group LLC makes a market in the securities or ADRs of Cisco Systems, Inc.
Jefferies Group LLC makes a market in the securities or ADRs of Intel Corporation.
Jefferies Group LLC makes a market in the securities or ADRs of Juniper, Inc.
Jefferies Group LLC makes a market in the securities or ADRs of Marvell Technology Group Ltd.
Within the past twelve months, Jefferies LLC and/or its affiliates received compensation for products and services other than investment banking services from non-investment banking, securities related compensation for client services it provided to Cisco Systems, Inc.
Within the past twelve months, Jefferies LLC and/or its affiliates received compensation for products and services other than investment banking services from non-investment banking, securities related compensation for client services it provided to Intel Corporation.
Within the past twelve months, Jefferies LLC and/or its affiliates received compensation for products and services other than investment banking services from non-investment banking, securities related compensation for client services it provided to Marvell Technology Group Ltd.



Explanation of Jefferies Ratings
Buy - Describes securities that we expect to provide a total return (price appreciation plus yield) of 15% or more within a 12-month period.
Hold - Describes securities that we expect to provide a total return (price appreciation plus yield) of between plus 15% and minus 10% within a 12-month period.
Underperform - Describes securities that we expect to provide a total return (price appreciation plus yield) of minus 10% or less within a 12-month period.
The expected total return (price appreciation plus yield) for Buy rated securities with an average security price consistently below $10 is 20% or more within a 12-month period as these companies are typically more volatile than the overall stock market. For Hold rated securities with an average security price consistently below $10, the expected total return (price appreciation plus yield) is plus or minus 20% within a 12-month period. For Underperform rated securities with an average security price consistently below $10, the expected total return (price appreciation plus yield) is minus 20% or less within a 12-month period.
NR - The investment rating and price target have been temporarily suspended. Such suspensions are in compliance with applicable regulations and/or Jefferies policies.
CS - Coverage Suspended. Jefferies has suspended coverage of this company.
NC - Not covered. Jefferies does not cover this company.
Restricted - Describes issuers where, in conjunction with Jefferies engagement in certain transactions, company policy or applicable securities regulations prohibit certain types of communications, including investment recommendations.
Monitor - Describes securities whose company fundamentals and financials are being monitored, and for which no financial projections or opinions on the investment merits of the company are provided.

Valuation Methodology
Jefferies' methodology for assigning ratings may include the following: market capitalization, maturity, growth/value, volatility and expected total return over the next 12 months. The price targets are based on several methodologies, which may include, but are not restricted to, analyses of market risk, growth rate, revenue stream, discounted cash flow (DCF), EBITDA, EPS, cash flow (CF), free cash flow (FCF), EV/EBITDA, P/E, PE/growth, P/CF, P/FCF, premium (discount)/average group EV/EBITDA, premium (discount)/average group P/E, sum of the parts, net asset value, dividend returns, and return on equity (ROE) over the next 12 months.

Jefferies Franchise Picks
Jefferies Franchise Picks include stock selections from among the best stock ideas from our equity analysts over a 12 month period. Stock selection is based on fundamental analysis and may take into account other factors such as analyst conviction, differentiated analysis, a favorable risk/reward ratio and investment themes that Jefferies analysts are recommending. Jefferies Franchise Picks will include only Buy rated stocks and the number can vary depending on analyst recommendations for inclusion. Stocks will be added as new opportunities arise and removed when the reason for inclusion changes, the stock has met its desired return, if it is no longer rated Buy and/or if it triggers a stop loss. Stocks having 120 day volatility in the bottom quartile of S&P stocks will continue to have a 15% stop loss, and the remainder will have a 20% stop. Franchise Picks are not intended to represent a recommended portfolio of stocks and are not sector based, but we may note where we believe a Pick falls within an investment style such as growth or value.

Risks which may impede the achievement of our Price Target
This report was prepared for general circulation and does not provide investment recommendations specific to individual investors. As such, the financial instruments discussed in this report may not be suitable for all investors and investors must make their own investment decisions based upon their specific investment objectives and financial situation utilizing their own financial advisors as they deem necessary. Past performance of the financial instruments recommended in this report should not be taken as an indication or guarantee of future results. The price, value of, and income from, any of the financial instruments mentioned in this report can rise as well as fall and may be affected by changes in economic, financial and political factors. If a financial instrument is denominated in a currency other than the investor's home currency, a change in exchange rates may adversely affect the price of, value of, or income derived from the financial instrument described in this report. In addition, investors in securities such as ADRs, whose values are affected by the currency of the underlying security, effectively assume currency risk.

Other Companies Mentioned in This Report
• Arista Networks, Inc. (ANET: $259.64, HOLD)
• Broadcom (AVGO: $249.51, BUY)
• Cisco Systems, Inc. (CSCO: $48.87, BUY)
• Intel Corporation (INTC: $46.45, UNDERPERFORM)
• Juniper, Inc. (JNPR: $29.92, HOLD)
• Marvell Technology Group Ltd. (MRVL: $19.27, BUY)


Notes: Each box in the Rating and Price Target History chart above represents actions over the past three years in which an analyst initiated on a company, made a change to a rating or price target of a company or discontinued coverage of a company. Legend: I: Initiating Coverage; D: Dropped Coverage; B: Buy; H: Hold; UP: Underperform.

Distribution of Ratings
                                        IB Serv./Past 12 Mos.    JIL Mkt Serv./Past 12 Mos.
Rating          Count    Percent        Count    Percent         Count    Percent
BUY             1146     54.55%         89       7.77%           14       1.22%
HOLD            836      39.79%         15       1.79%           1        0.12%
UNDERPERFORM    119      5.66%          0        0.00%           0        0.00%


Other Important Disclosures Jefferies does and seeks to do business with companies covered in its research reports. As a result, investors should be aware that Jefferies may have a conflict of interest that could affect the objectivity of this report. Investors should consider this report as only a single factor in making their investment decision. Jefferies Equity Research refers to research reports produced by analysts employed by one of the following Jefferies Group LLC (“Jefferies”) group companies: : Jefferies LLC which is an SEC registered broker-dealer and a member of FINRA (and distributed by Jefferies Research Services, LLC, an SEC registered Investment Adviser, to clients paying separately for such research). United Kingdom: Jefferies International Limited, which is authorized and regulated by the Financial Conduct Authority; registered in England and Wales No. 1978621; registered office: Vintners Place, 68 Upper Thames Street, London EC4V 3BJ; telephone +44 (0)20 7029 8000; facsimile +44 (0)20 7029 8010. Hong Kong: Jefferies Hong Kong Limited, which is licensed by the Securities and Futures Commission of Hong Kong with CE number ATS546; located at Suite 2201, 22nd Floor, Cheung Kong Center, 2 Queen’s Road Central, Hong Kong. Singapore: Jefferies Singapore Limited, which is licensed by the Monetary Authority of Singapore; located at 80 Raffles Place #15-20, UOB Plaza 2, Singapore 048624, telephone: +65 6551 3950. Japan: Jefferies (Japan) Limited, Tokyo Branch, which is a securities company registered by the Financial Services Agency of Japan and is a member of the Japan Securities Dealers Association; located at Hibiya Marine Bldg, 3F, 1-5-1 Yuraku-cho, Chiyoda-ku, Tokyo 100-0006; telephone +813 5251 6100; facsimile +813 5251 6101. India: Jefferies India Private Limited (CIN - U74140MH2007PTC200509), which is licensed by the Securities and Exchange Board of India as a Merchant Banker (INM000011443), Research Analyst (INH000000701) and a Stock Broker with Bombay Stock Exchange Limited (INB011491033) and National Stock Exchange of India Limited (INB231491037) in the Capital Market Segment; located at 42/43, 2 North Avenue, Maker Maxity, Bandra-Kurla Complex, Bandra (East) Mumbai 400 051, India; Tel +91 22 4356 6000. This report was prepared by personnel who are associated with Jefferies (Jefferies International Limited, Jefferies Hong Kong Limited, Jefferies Singapore Limited, Jefferies (Japan) Limited, Jefferies India Private Limited); or by personnel who are associated with both Jefferies LLC and Jefferies Research Services LLC (“JRS”). Jefferies LLC is a US registered broker-dealer and is affiliated with JRS, which is a US registered investment adviser. JRS does not create tailored or personalized research and all research provided by JRS is impersonal. If you are paying separately for this research, it is being provided to you by JRS. Otherwise, it is being provided by Jefferies LLC. Jefferies LLC, JRS, and their affiliates are collectively referred to below as “Jefferies”. Jefferies may seek to do business with companies covered in this research report. As a result, investors should be aware that Jefferies may have a conflict of interest that could affect the objectivity of this report. Investors should consider this report as only one of many factors in making their investment decisions. Specific conflict of interest and other disclosures that are required by FINRA and other rules are set forth in this disclosure section. 
* * * If you are receiving this report from a non-US Jefferies entity, please note the following: Unless prohibited by the provisions of Regulation S of the U.S. Securities Act of 1933, as amended, this material is distributed in the United States by Jefferies LLC, which accepts responsibility for its contents in accordance with the provisions of Rule 15a-6 under the US Securities Exchange Act of 1934, as amended. Transactions by or on behalf of any US person may only be effected through Jefferies LLC. In the United Kingdom and European Economic Area this report is issued and/or approved for distribution by Jefferies International Limited (“JIL”) and is intended for use only by persons who have, or have been assessed as having, suitable professional experience and expertise, or by persons to whom it can be otherwise lawfully distributed. JIL allows its analysts to undertake private consultancy work. JIL’s conflicts management policy sets out the arrangements JIL employs to manage any potential conflicts of interest that may arise as a result of such consultancy work. Jefferies LLC, JIL and their affiliates, may make a market or provide liquidity in the financial instruments referred to in this report; and where they do make a market, such activity is disclosed specifically in this report under “company specific disclosures”. For Canadian investors, this material is intended for use only by professional or institutional investors. None of the investments or investment services mentioned or described herein is available to other persons or to anyone in Canada who is not a "Designated Institution" as defined by the Securities Act (Ontario). In Singapore, Jefferies Singapore Limited (“JSL”) is regulated by the Monetary Authority of Singapore. For investors in the Republic of Singapore, this material is provided by JSL pursuant to Regulation 32C of the Financial Advisers Regulations. The material contained in this document is intended solely for accredited, expert or institutional investors, as defined under the Securities and Futures Act (Cap. 289 of Singapore). If there are any matters arising from, or in connection with this material, please contact JSL, located at 80 Raffles Place #15-20, UOB Plaza 2, Singapore 048624, telephone: +65 6551 3950. In Japan, this material is issued and distributed by Jefferies (Japan) Limited to institutional investors only. In Hong Kong, this report is issued and approved by Jefferies Hong Kong Limited and is intended for use only by professional investors as defined in the Hong Kong Securities and Futures Ordinance and its subsidiary legislation. In the Republic of China (Taiwan), this report should not be distributed. The research in relation to this report is conducted outside the People’s Republic of China (“PRC”). This report does not constitute an offer to sell or the solicitation of an offer to buy any securities in the PRC. PRC investors shall have the relevant qualifications to invest in such securities and shall be responsible for obtaining all relevant approvals, licenses, verifications and/or registrations from the relevant governmental authorities themselves. In India, this report is made available by Jefferies India Private Limited. In Australia, this information is issued solely by JIL and is directed solely at wholesale clients within the meaning of the Corporations Act 2001 of Australia (the "Act"), in connection with their consideration of any investment or investment service that is the subject of this document. 
Any offer or issue that is the subject of this document does not require, and this document is not, a disclosure document or product disclosure statement within the meaning of the Act. JIL is authorised and regulated by the Financial Conduct Authority under the laws of the United Kingdom, which differ from Australian laws. JIL has obtained relief under Australian Securities and Investments Commission Class Order 03/1099, which conditionally exempts it from holding an Australian financial services license under the Act in respect of the provision of certain financial services to wholesale clients. Recipients of this document in any other jurisdictions should inform themselves about and observe any applicable legal requirements in relation to the receipt of this document. This report is not an offer or solicitation of an offer to buy or sell any security or derivative instrument, or to make any investment. Any opinion or estimate constitutes the preparer's best judgment as of the date of preparation, and is subject to change without notice. Jefferies assumes no obligation to maintain or update this report based on subsequent information and events. Jefferies, and their respective officers, directors, and employees, may have long or short positions in, or may buy or sell any of the securities, derivative instruments or other investments mentioned or described herein, either as agent or as principal for their own account. This material is provided solely for informational purposes and is not tailored to any recipient, page 53 of 54 George C. Notter, Equity Analyst, (415) 229-1522, [email protected]


October 2, 2018 and is not based on, and does not take into account, the particular investment objectives, portfolio holdings, strategy, financial situation, or needs of any recipient. As such, any advice or recommendation in this report may not be suitable for a particular recipient. Jefferies assumes recipients of this report are capable of evaluating the information contained herein and of exercising independent judgment. A recipient of this report should not make any investment decision without first considering whether any advice or recommendation in this report is suitable for the recipient based on the recipient’s particular circumstances and, if appropriate or otherwise needed, seeking professional advice, including tax advice. Jefferies does not perform any suitability or other analysis to check whether an investment decision made by the recipient based on this report is consistent with a recipient’s investment objectives, portfolio holdings, strategy, financial situation, or needs By providing this report, neither JRS nor any other Jefferies entity accepts any authority, discretion, or control over the management of the recipient’s assets. Any action taken by the recipient of this report, based on the information in the report, is at the recipient’s sole judgment and risk. The recipient must perform his or her own independent review of any prospective investment. If the recipient uses the services of Jefferies LLC (or other affiliated broker-dealers), in connection with a purchase or sale of a security that is a subject of these materials, such broker-dealer may act as principal for its own accounts or as agent for another person. Only JRS is registered with the SEC as an investment adviser; and therefore neither Jefferies LLC nor any other Jefferies affiliate has any fiduciary duty in connection with distribution of these reports. The price and value of the investments referred to herein and the income from them may fluctuate. Past performance is not a guide to future performance, future returns are not guaranteed, and a loss of original capital may occur. Fluctuations in exchange rates could have adverse effects on the value or price of, or income derived from, certain investments. This report has been prepared independently of any issuer of securities mentioned herein and not as agent of any issuer of securities. No Equity Research personnel have authority whatsoever to make any representations or warranty on behalf of the issuer(s). Any comments or statements made herein are those of the Jefferies entity producing this report and may differ from the views of other Jefferies entities. This report may contain information obtained from third parties, including ratings from credit ratings agencies such as Standard & Poor’s. Reproduction and distribution of third party content in any form is prohibited except with the prior written permission of the related third party. Jefferies does not guarantee the accuracy, completeness, timeliness or availability of any information, including ratings, and is not responsible for any errors or omissions (negligent or otherwise), regardless of the cause, or for the results obtained from the use of such content. Third-party content providers give no express or implied warranties, including, but not limited to, any warranties of merchantability or fitness for a particular purpose or use. 
Neither Jefferies nor any third-party content provider shall be liable for any direct, indirect, incidental, exemplary, compensatory, punitive, special or consequential damages, costs, expenses, legal fees, or losses (including lost income or profits and opportunity costs) in connection with any use of their content, including ratings. Credit ratings are statements of opinions and are not statements of fact or recommendations to purchase, hold or sell securities. They do not address the suitability of securities or the suitability of securities for investment purposes, and should not be relied on as investment advice. Jefferies research reports are disseminated and available electronically, and, in some cases, also in printed form. Electronic research is simultaneously made available to all clients. This report or any portion hereof may not be reprinted, sold or redistributed without the written consent of Jefferies. Neither Jefferies nor any of its respective directors, officers or employees, is responsible for guaranteeing the financial success of any investment, or accepts any liability whatsoever for any direct, indirect or consequential damages or losses arising from any use of this report or its contents. Nothing herein shall be construed to waive any liability Jefferies has under applicable U.S. federal or state securities laws. For Important Disclosure information relating to JRS, please see https://adviserinfo.sec.gov/IAPD/Content/Common/crd_iapd_Brochure.aspx? BRCHR_VRSN_ID=483878 and https://adviserinfo.sec.gov/Firm/292142 or visit our website at https://javatar.bluematrix.com/sellside/ Disclosures.action, or www.jefferies.com, or call 1.888.JEFFERIES. © 2018 Jefferies Group LLC
