Optimizing Network Performance in Distributed Machine Learning


Luo Mai (Imperial College London), Chuntao Hong (Microsoft Research), Paolo Costa (Microsoft Research)

Abstract

To cope with the ever growing availability of training data, there have been several proposals to scale machine learning computation beyond a single server and distribute it across a cluster. While this enables reducing the training time, the observed speed-up is often limited by network bottlenecks.

To address this, we design MLNET, a host-based communication layer that aims to improve the network performance of distributed machine learning systems. This is achieved through a combination of traffic reduction techniques (to diminish network load in the core and at the edges) and traffic management (to reduce average training time). A key feature of MLNET is its compatibility with existing hardware and software infrastructure, so it can be immediately deployed.

We describe the main techniques underpinning MLNET and show through simulation that the overall training time can be reduced by up to 78%. While preliminary, our results indicate the critical role played by the network and the benefits of introducing a new communication layer to increase the performance of distributed machine learning systems.

1 Introduction

Over the last decade, machine learning has witnessed an increasing wave of popularity across several domains, including web search, image and speech recognition, text processing, gaming, and health care. A key factor causing this trend is the availability of large amounts of data that can be used for training purposes. This has led to the appearance of several proposals aiming at scaling out the computation by distributing it across many servers [1, 13, 16, 24, 34, 35].

Typically, these systems adopt an approach referred to as data parallelism [16]. Rather than training a single model with all the available input data, they replicate the model across many servers and feed each replica with a subset of the input data. Since the model replicas are trained using different input data, their model parameters will typically diverge. To reconcile these parameters and ensure that all model replicas eventually converge, each replica periodically pushes its set of parameter values to a centralized server, called the parameter server [24]. The latter aggregates all the received updates for each parameter (e.g., by averaging them) and then sends back to all replicas the newly computed set of values, which are used at the beginning of the next iteration. As the total number of parameters can be very high (up to 10^12 [9]), multiple parameter servers are used, with each one being responsible for a subset of the parameters.
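To make the exchange concrete, the following minimal Python sketch illustrates the push/aggregate/pull cycle just described. It is an illustration only, not the API of any of the cited systems; all names (ParameterServer, push, aggregate_and_broadcast) are hypothetical, and averaging is one possible aggregation function.

    # Minimal sketch of one synchronization round at a parameter server.
    import numpy as np

    class ParameterServer:
        def __init__(self, shard_size):
            # Each server owns only a shard (subset) of the full model.
            self.params = np.random.randn(shard_size)
            self.pending = []  # updates pushed by model replicas

        def push(self, replica_params):
            # Called by every model replica at the end of an iteration.
            self.pending.append(replica_params)

        def aggregate_and_broadcast(self):
            # Reconcile the diverged replicas, e.g. by averaging their
            # values, and return the new model that all replicas pull
            # for the next iteration.
            self.params = np.mean(self.pending, axis=0)
            self.pending = []
            return self.params

    server = ParameterServer(shard_size=4)
    for replica in range(3):
        server.push(server.params + 0.1 * replica)  # stand-in for real updates
    new_params = server.aggregate_and_broadcast()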
A major challenge of this approach is the high communication cost. Model replicas must frequently read and write globally shared parameters. This generates a large amount of network traffic and, due to the sequential nature of many of the machine learning algorithms used, it may also stall the computation if the synchronization latency is high. Therefore, the network is often regarded as one of the main bottlenecks for distributed machine learning systems [13, 16, 25]. To alleviate this issue, these systems are often deployed on high-performance network fabrics such as InfiniBand or RoCE [13, 23], while others have proposed to trade off algorithm training efficiency for system performance by introducing asynchronous communication [10, 24], thus removing some of the barriers. Unfortunately, neither of these approaches is completely satisfactory, as the former significantly increases infrastructure costs while the latter reduces overall training efficiency.

In this paper, we explore a different yet complementary point of the design space. We argue that network bottlenecks can be greatly reduced through a customized communication layer. To demonstrate this, we designed MLNET, a novel communication layer for distributed machine learning. MLNET uses tree-based overlays to implement distributed aggregation and multicast and reduce network traffic, and relies on traffic control and prioritization to improve average training time.

A key constraint underlying our design is that we wanted MLNET to be a drop-in solution for existing machine learning deployments. Therefore, we implemented MLNET as a user-space process running on hosts, without requiring any changes in the networking hardware, in the OS stack, or in the training algorithm code. Further, by sitting in between workers and parameter servers, it decouples these two classes of servers, enabling each one to scale independently and efficiently masking network and server failures. We evaluate its effectiveness in Section 4 by means of large-scale simulations with 800 servers and 50 to 400 parameter servers. The results show that MLNET reduces the training time by a factor of up to 5x.

2 Background

In this section, we provide a brief introduction to machine learning, motivate the need for its distributed execution, and discuss the use of parameter servers for scaling distributed machine learning.

2.1 Machine Learning

The goal of a machine learning algorithm is to construct a prediction model that extracts useful knowledge from training data and uses it to make inferences about future arrival data. This can be formalized as an optimization problem: given a set of training data X, the algorithm tries to find a model W that minimizes the error of a prediction function F(X, W). Typically, a machine learning algorithm approaches this problem iteratively, starting from a randomly generated W and then refining its solution gradually as more and more data are processed.

Complex models are usually able to capture the knowledge hidden in training data. At the extreme, a sufficiently complex model can "memorize" all the information contained in the data. In this case, it can give the correct prediction for any sample it has seen before, but may perform poorly on unseen samples. This is called over-fitting: a model fits its training data well but does not generalize to other data. This is why a large amount of training data is necessary for machine learning. By using more data, a model can generalize sufficiently, reducing the risk of over-fitting.

2.2 Distributed Machine Learning

As the size of the training data can significantly affect prediction accuracy, it has become common practice to train models with large datasets. To speed up these training tasks, they are often distributed across many servers.

In a distributed setting, a server iteratively refines a shared model by learning from a local data partition, and periodically synchronizes this model with the other servers. More specifically, after each iteration it calculates a refinement ΔW_i to the model W. To make sure that all servers eventually converge to the same model, they can synchronize every iteration, every n iterations, or completely asynchronously. When machine learning algorithms are implemented on traditional distributed platforms such as Hadoop [38] or Spark [39], servers have to synchronize every iteration. This requires placing a barrier at the end of each iteration, incurring increasing overhead as the system scales.

Parameter Server [24] is another approach to implementing synchronization in distributed machine learning. It outperforms the aforementioned platforms thanks to domain-specific engineering and algorithmic optimizations. In this approach, a set of servers act as parameter servers that store the model W. The other servers process the training data and act as workers. After the ΔW_i are calculated, workers do not communicate with each other directly, but push the ΔW_i to the parameter servers and then pull a new W to be used in the next iteration. By tuning push/pull frequencies, programmers can balance training efficiency and system performance. For example, in the Stale Synchronous Parallel (SSP) model [12, 15], workers are allowed to cache W and use it in the next iteration while sending the ΔW_i of the previous iteration to the servers, as long as the cached version is within a staleness threshold s. In this way, communication can be overlapped with computation, as sketched below.
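The staleness rule at the heart of SSP can be sketched as follows. This is a deliberately simplified rendering under the assumption of a single logical clock; real SSP implementations [12, 15] track per-worker clocks on the servers. Server and compute_update are local stand-ins, not the interfaces of any cited system.

    # Simplified SSP worker loop: push updates asynchronously and keep
    # using the cached model while it is at most s iterations stale.
    class Server:
        def __init__(self, w):
            self.w, self.clock = w, 0
        def pull(self):
            return self.w, self.clock
        def push_async(self, delta_w):
            self.w += delta_w        # in reality: aggregated across workers
            self.clock += 1

    def compute_update(w, data):
        return -0.01 * w             # placeholder for a real gradient step

    def ssp_worker(server, data, s, num_iters):
        cached_w, cached_clock = server.pull()
        for t in range(num_iters):
            delta_w = compute_update(cached_w, data)
            server.push_async(delta_w)       # overlaps with computation
            if (t + 1) - cached_clock > s:   # cached copy too stale?
                cached_w, cached_clock = server.pull()  # blocking refresh

    ssp_worker(Server(w=1.0), data=None, s=2, num_iters=10)

The pull in the stale branch is the only blocking step; with s = 0 the loop degenerates to fully synchronous training, while larger s trades model freshness for fewer synchronization stalls.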
Nevertheless, as workers still have to periodically synchronize the model, the network can quickly become a bottleneck. In the next section we show how MLNET can alleviate this problem using a customized communication layer.

3 MLNET Design

We begin the section by describing the MLNET architecture, and then show how it makes it easy to implement our two techniques for optimizing network performance, namely traffic reduction and traffic prioritization. While we believe that this is a contribution per se, we also see it as a first step towards a deeper rethinking of the network design for distributed machine learning. We elaborate on this point in Section 5.

3.1 Architecture

MLNET is a communication layer, running as a local process on workers and parameter servers, similar to Facebook's mcrouter setup [30]. These local processes behave like proxies, intercepting all exchanges between workers and parameter servers. To push training results, a worker initiates a normal TCP connection to a parameter server that is actually emulated by the local MLNET process. Regardless of the actual numbers of parameter servers and workers, MLNET maintains a single parameter-server abstraction to workers and, symmetrically, a single worker abstraction to parameter servers. This design decouples workers and parameter servers and allows them to scale in and out independently.

3.2 Traffic Reduction

Gradients can be aggregated incrementally and the final result is still correct. We leverage this property to reduce traffic during the push phase. For each parameter server, MLNET uses it as the root and builds a spanning tree connecting all workers. The workers at the leaves send gradients to their parent nodes. The latter aggregate all received gradients and push the results upstream towards the root, where the last step of the aggregation is performed.
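A sketch of the push phase over such a spanning tree follows. Tree construction, fault handling, and the wire protocol are omitted, and summation stands in for whatever commutative and associative aggregation function is in use; that algebraic property is what makes incremental aggregation yield the same result as sending every gradient directly to the root. The topology and names are illustrative.

    import numpy as np

    # Tree-based push phase: each parent combines the gradients of its
    # subtree and forwards a single update upstream, so the root (the
    # parameter server) receives one aggregate instead of one message
    # per worker.
    def push_up(node, children_of, gradient_of):
        partial = gradient_of.get(node, 0.0)   # the root holds no gradient
        for child in children_of.get(node, []):
            partial = partial + push_up(child, children_of, gradient_of)
        return partial  # one combined update crosses this node's uplink

    # A root 'ps' (the parameter server) with two aggregating workers,
    # each parenting two leaf workers.
    children_of = {"ps": ["w1", "w2"], "w1": ["w3", "w4"], "w2": ["w5", "w6"]}
    gradient_of = {w: np.ones(4) for w in ["w1", "w2", "w3", "w4", "w5", "w6"]}

    total = push_up("ps", children_of, gradient_of)
    # Identical to the flat sum each worker would otherwise send directly:
    assert np.array_equal(total, sum(gradient_of.values()))

With a fan-in of k children per node, the root's incoming traffic drops from one message per worker to k messages per round, at the cost of extra hops through the overlay.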