Powering Winning Low-Latency Trading Strategies: Gaining an Edge Through Server Performance
February 2013

An industry briefing researched and written by Low-Latency.com, in association with Dell

Introduction

It’s appropriate to begin with a little levity: two campers are walking through the forest when they suddenly encounter a big grizzly bear. The bear rears up on his hind legs and lets out a terrifying roar. Both campers freeze in their tracks. The first camper whispers, “I’m sure glad I wore my running shoes today.” “It doesn’t matter what kind of shoes you’re wearing, you won’t outrun that bear,” replies the second. “I don’t have to outrun the bear, I just have to outrun you,” answers the first.

In the world of the financial markets, securing an edge over the competition can mean life or death for a trading firm. Whether it is acting on news alerts or price movements, determining the best trading opportunity, or delivering an order to the marketplace, microseconds mean the difference between winning and merely playing. Reducing those microseconds – referred to as latency – is a continuing focus for trading firms, and an increasing challenge as latency is pushed down to double- and single-digit microseconds. The “race to zero” becomes increasingly difficult and expensive to engage in as it nears its conclusion.

To date, much of the focus on latency reduction has been directed at reducing the physical distance between trading firms and the markets in which they participate, since distance is the source of so-called propagation latency. Nowadays, co-location of trading firms’ servers in the same data centers as markets’ matching engines has nearly eradicated that distance and its associated latency. With local network latency essentially addressed, an emerging but still challenging area of focus for latency reduction is the trade execution and matching system applications themselves, and the servers that host them.

This industry briefing outlines the low-latency trading landscape, details the latency characteristics of key data and trade execution processing applications, and introduces microprocessor techniques, such as Dell Processor Acceleration Technology, designed to reduce latency in a cost-effective manner.

The Business of Low-Latency Trading

Market Automation, Fragmentation and Execution Latency

Whether in the U.S., Europe, Latin America or Asia/Pacific, exchanges and alternative trading systems (ATS) have invested heavily in low-latency automation. In the U.S., competition among these marketplaces was encouraged by the 2007 implementation of Regulation NMS, an initiative of the Securities and Exchange Commission. Trading firms seek out the exchanges offering the fastest execution times, so that the best price can be achieved before the markets move against them. Today, with 13 regulated equities markets and around 50 ATS in existence, round-trip matching times of less than 100 microseconds are commonplace.

In Europe, similar cross-country regulation in the form of the Markets in Financial Instruments Directive (MiFID) was introduced, and exchanges across the continent have engaged in similar competition for order flow, with the SIX Swiss Exchange – leveraging technology from NASDAQ – offering a matching time of less than 40 microseconds.
Markets in Asia/Pacific – from Singapore to Japan to Australia – and in Latin America – where Brazil and Mexico are leading the way – are also following the low-latency matching trend.

And it’s not just the cash equities markets that have become fragmented and seen low-latency technology investment. In the U.S., the addition at the end of 2012 of the Miami Options Exchange brought the number of equity options markets to 11, and applications for more are pending. Meanwhile, major futures exchanges, such as the Chicago Mercantile Exchange, NYSE Liffe in London and the Frankfurt-based Eurex, have updated their technology to reduce matching latency. As a result, trading firms are able to engage in low-latency arbitrage between cash and derivatives markets.

New automated marketplaces beyond equities – including foreign exchange and fixed income – are also emerging and investing in latency reduction. Markets such as FXAll and Hotspot FX have emerged to support FX high-frequency trading (HFT) strategies, while a number of Swap Execution Facilities are expected to establish themselves and will compete, at least in part, on the latency of their trading functions.

Algorithmic and High Frequency Trading

As new markets in all asset classes have emerged, and market fragmentation has risen alongside advances in technology, trading firms have adopted new approaches to electronic trading, both for their proprietary operations and for their investment management customers.

Algorithmic trading – which might be broadly defined as computer-initiated trading of financial instruments – began in the purest sense in the 1980s as a means to trade baskets of securities, sometimes arbitraging between cash and futures markets. During the past few years it has become more widespread and is now directed at a wide range of markets.

So-called execution algorithms are widely used by investment management firms to buy or sell large blocks of equities with minimum market impact. These algorithms seek out liquidity by examining the order books published by each market, breaking a large order down into much smaller ones and trickling them out across markets over an extended period of time (a simple slicing schedule is sketched at the end of this section).

Latency is an important factor in such strategies: best execution must be achieved with a minimum of price slippage while orders are being fed into the various markets, which will respond by adjusting their bid/offer prices.

For some algorithms, jitter – the variance of latency from the norm – is just as important as latency itself. These strategies take account of known price variances on markets over microsecond timespans, so keeping those timespans consistent is essential for the algorithms to work effectively.

High-frequency trading – or HFT – is an important class of algorithmic trading in which strategies drive the very rapid buying and selling of assets, individually or as portfolios, with the intent of aggregating small profits per transaction over many trades. For HFT, price slippage must be kept to a minimum for a strategy to be profitable, and hence low-latency execution is a must.

Many trading firms access markets via the Direct Market Access (DMA) services of sponsoring brokers, who are continuously reducing the latency of their offerings. Against that trend, regulatory pressure requires these brokers to implement compliance and risk monitoring functions, so implementing that functionality while adding the minimum of latency is important.
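As a concrete illustration of the order-slicing idea described above, the minimal sketch below splits a hypothetical parent order into equal child orders released on a fixed schedule, in the spirit of a simple TWAP-style execution algorithm. The symbol, quantities and interval are illustrative assumptions rather than details from this briefing, and a production algorithm would adapt slice sizes to published order books, venue liquidity and fill feedback.

```python
# Minimal TWAP-style order-slicing sketch (illustrative only; names and sizes are assumptions).
import time
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChildOrder:
    symbol: str
    quantity: int
    release_at: float  # epoch seconds at which this slice should be sent to a venue

def slice_parent_order(symbol: str, total_qty: int, slices: int, interval_s: float,
                       start: Optional[float] = None) -> List[ChildOrder]:
    """Split a parent order into equal child orders released every interval_s seconds."""
    start = time.time() if start is None else start
    base, remainder = divmod(total_qty, slices)
    children = []
    for i in range(slices):
        qty = base + (1 if i < remainder else 0)  # spread any remainder across the early slices
        children.append(ChildOrder(symbol, qty, start + i * interval_s))
    return children

if __name__ == "__main__":
    # Hypothetical example: work 100,000 shares of "XYZ" in 20 slices, one every 30 seconds.
    for child in slice_parent_order("XYZ", 100_000, slices=20, interval_s=30.0):
        print(child)
```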
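Jitter, defined above as the variance of latency from the norm, is typically tracked alongside average latency by timestamping round trips and summarizing the samples. The short sketch below computes the usual summary statistics from a set of measured latencies; the sample values are invented for illustration.

```python
# Summarizing latency and jitter from round-trip measurements in microseconds (illustrative values).
import statistics
from typing import Dict, List

def latency_summary(samples_us: List[float]) -> Dict[str, float]:
    """Return mean latency, jitter (standard deviation) and tail statistics for the samples."""
    ordered = sorted(samples_us)
    p99_index = min(len(ordered) - 1, round(0.99 * (len(ordered) - 1)))
    return {
        "mean_us": statistics.mean(ordered),
        "jitter_stdev_us": statistics.pstdev(ordered),
        "p99_us": ordered[p99_index],
        "max_us": ordered[-1],
    }

if __name__ == "__main__":
    measured = [92.0, 95.5, 93.1, 97.8, 94.2, 181.0, 93.7, 95.0]  # one outlier spike
    print(latency_summary(measured))
```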
The Market Data Explosion

As a result of market fragmentation and competition, the growth in automated trading of options, derivatives and foreign exchange, and the introduction of algorithmic and high-frequency trading, the marketplace has witnessed a massive increase – an explosion, as some have termed it – in aggregate market data (trade report and quotation) rates.

Given that market data is the lifeblood of algorithmic and HFT strategies, being able to digest it with minimum latency – never missing an update, and processing and storing each one – is a crucial first step in the automated trading process.

A major challenge in processing market data is coping with peak volumes, which generally occur when markets open and close, but can also occur during the day as trading firms react to corporate, economic and political events. Aggregate peaks for U.S. markets have recently been as high as 6.65 million price messages per second (according to www.marketdatapeaks.com), with options market data accounting for much of that.

Moreover, despite the current period of low trading volumes, aggregate market data rates are increasing, and marketplaces expect them to keep rising in future years. For example, OPRA, which consolidates data feeds from the U.S. options markets, is advising market participants to plan for peaks of nearly 13 million messages per second in 2013.

Thus, the challenge for automated trading systems is to cope with both high data throughput, in the form of many millions of price updates, and low-latency processing of that data.

Low-Latency Trading Technology

As marketplaces across all asset classes and geographies automate and provide faster matching, trading firms are reducing the latency of their execution technologies to remain competitive with their peers. Broadly speaking, the latency associated with trading infrastructure has two components: the latency of moving data from point A to point B, and the latency of processing that data at each end. Much of the focus to date has been on the latency of moving data – propagation latency – between marketplaces and trading firms, and the primary contributor to that latency is the distance between the parties involved.

Propagation Latency and Co-Location

Reducing propagation latency by making use of fast, direct fiber and wireless connections has been a common approach for trading firms, many of which now also leverage co-location: placing their execution systems in the same data center as the matching engines they trade against. Within a co-location data center, connectivity leverages local area networking technology and latency is measured in tens of microseconds. Ethernet at 10 Gbits/second is the most common technology in place, with switches from the likes of Cisco Systems, Arista Networks, Juniper Networks and Gnodal. Data transports are commonly TCP/IP for transactional data and UDP for the broadcast of market data.
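To put propagation latency in perspective, light travels through optical fiber at roughly two-thirds of its speed in a vacuum, which works out to about 5 microseconds per kilometre each way. The sketch below applies that rule of thumb; the distances chosen are assumptions for illustration, but they show why co-location all but removes propagation latency from the equation.

```python
# Back-of-the-envelope propagation latency: fiber slows light by its refractive index (~1.5),
# giving roughly 5 microseconds per kilometre one way. Distances below are illustrative.
SPEED_OF_LIGHT_KM_PER_S = 299_792.458
FIBER_REFRACTIVE_INDEX = 1.5  # approximate for standard single-mode fiber

def one_way_latency_us(distance_km: float) -> float:
    """One-way propagation delay over fiber of the given length, in microseconds."""
    speed_km_per_s = SPEED_OF_LIGHT_KM_PER_S / FIBER_REFRACTIVE_INDEX
    return distance_km / speed_km_per_s * 1_000_000

if __name__ == "__main__":
    for label, km in [("Long-haul route (~1,200 km)", 1_200),
                      ("Metro link (~50 km)", 50),
                      ("Within a co-location hall (~0.1 km)", 0.1)]:
        one_way = one_way_latency_us(km)
        print(f"{label}: {one_way:,.1f} us one way, {2 * one_way:,.1f} us round trip")
```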
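Finally, as a minimal sketch of the UDP side of market data delivery: the snippet below joins a multicast group and counts the datagrams it receives each second, which is one simple way to observe the message-rate peaks described earlier. The group address and port are hypothetical placeholders rather than details from this briefing, and a production feed handler would follow the venue’s published feed specification and typically use kernel-bypass networking.

```python
# Minimal UDP multicast listener that counts messages per second (illustrative sketch only).
# MCAST_GROUP and MCAST_PORT are hypothetical placeholders, not a real feed address.
import socket
import struct
import time

MCAST_GROUP = "239.1.1.1"
MCAST_PORT = 30001

def listen_and_count() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    # Join the multicast group on the default interface.
    mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    count, window_start = 0, time.time()
    while True:
        sock.recv(65535)  # one market data datagram; contents are ignored in this sketch
        count += 1
        now = time.time()
        if now - window_start >= 1.0:
            print(f"{count} messages/second")
            count, window_start = 0, now

if __name__ == "__main__":
    listen_and_count()
```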