SDN/NFV Control Plane Optimizations for Fixed-Mobile Access and Data Center Optical Networking
16 pages · PDF, 1020 KB
Recommended publications
-
CS 554 / CSE 512: Parallel Numerical Algorithms Lecture Notes Chapter 1: Parallel Computing
Michael T. Heath and Edgar Solomonik, Department of Computer Science, University of Illinois at Urbana-Champaign. September 4, 2019.

1 Motivation

Computational science has driven demands for large-scale machine resources since the early days of computing. Supercomputers—collections of machines connected by a network—are used to solve the most computationally expensive problems and drive the frontier capabilities of scientific computing. Leveraging such large machines effectively is a challenge, however, for both algorithms and software systems. Numerical algorithms—computational methods for finding approximate solutions to mathematical problems—serve as the basis for computational science. Parallelizing these algorithms is a necessary step for deploying them on larger problems and accelerating the time to solution. Consequently, the interdependence of parallel and numerical computing has shaped the theory and practice of both fields. To study parallel numerical algorithms, we will first aim to establish a basic understanding of parallel computers and formulate a theoretical model for the scalability of parallel algorithms.

1.1 Need for Parallelism

Connecting the latest technology chips via networks so that they can work in concert to solve larger problems has long been the method of choice for solving problems beyond the capabilities of a single computer. Improvements in microprocessor technology, hardware logic, and algorithms have expanded the scope of problems that can be solved effectively on a sequential computer. These improvements face fundamental physical barriers, however, so that parallelism has prominently made its way on-chip. Increasing the number of processing cores on a chip is now the dominant source of hardware-level performance improvement.
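The excerpt stops just short of the scalability model it promises. The standard starting point is the speedup S_p = T_1 / T_p, the ratio of the sequential running time to the running time on p processors, together with the parallel efficiency E_p = S_p / p. As a concrete taste of the on-chip parallelism described above, here is a minimal sketch (our own illustration, not code from the notes; it assumes C++11 or later and a shared-memory multicore machine) that splits a large summation into one contiguous chunk per hardware thread and measures the resulting speedup:

```cpp
#include <algorithm>
#include <chrono>
#include <future>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Illustrative only: sum `data` with `p` worker threads, giving each
// worker one contiguous chunk and combining the partial sums at the end.
double parallel_sum(const std::vector<double>& data, unsigned p) {
    std::vector<std::future<double>> partials;
    const std::size_t chunk = data.size() / p;
    for (unsigned i = 0; i < p; ++i) {
        const std::size_t begin = i * chunk;
        const std::size_t end = (i + 1 == p) ? data.size() : begin + chunk;
        partials.push_back(std::async(std::launch::async, [&data, begin, end] {
            return std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        }));
    }
    double total = 0.0;
    for (auto& f : partials) total += f.get();  // combine partial results
    return total;
}

int main() {
    const std::vector<double> data(1u << 24, 1.0);  // 2^24 elements, all 1.0
    const unsigned p = std::max(1u, std::thread::hardware_concurrency());

    const auto t0 = std::chrono::steady_clock::now();
    const double seq_sum = std::accumulate(data.begin(), data.end(), 0.0);  // ~ T_1
    const auto t1 = std::chrono::steady_clock::now();
    const double par_sum = parallel_sum(data, p);                           // ~ T_p
    const auto t2 = std::chrono::steady_clock::now();

    const std::chrono::duration<double> t_seq = t1 - t0, t_par = t2 - t1;
    std::cout << "p = " << p << ", sequential sum = " << seq_sum
              << ", parallel sum = " << par_sum
              << ", observed speedup ~ " << t_seq.count() / t_par.count() << "\n";
    return 0;
}
```

On a machine with p cores the hope is a speedup approaching p; for a memory-bound kernel like summation, memory bandwidth typically caps the observed gain well below that bound, which is exactly the kind of effect a scalability model must account for.
-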
Principles of Distributed Computing
Principles of Distributed Computing. Roger Wattenhofer, [email protected]. Spring 2016.

Contents

1 Vertex Coloring
  1.1 Problem & Model
  1.2 Coloring Trees
2 Tree Algorithms
  2.1 Broadcast
  2.2 Convergecast
  2.3 BFS Tree Construction
  2.4 MST Construction
3 Leader Election
  3.1 Anonymous Leader Election
  3.2 Asynchronous Ring
  3.3 Lower Bounds
  3.4 Synchronous Ring
4 Distributed Sorting
  4.1 Array & Mesh
  4.2 Sorting Networks
  4.3 Counting Networks
5 Shared Memory
  5.1 Model
  5.2 Mutual Exclusion
  5.3 Store & Collect
6 Shared Objects
7 Maximal Independent Set
  7.4 Applications
8 Locality Lower Bounds
  8.1 Model
  8.2 Locality
  8.3 The Neighborhood Graph
9 Social Networks
  9.1 Small World Networks
  9.2 Propagation Studies
10 Synchronization
  10.1 Basics
  10.2 Synchronizer α
  10.3 Synchronizer β
  10.4 Synchronizer γ
  10.5 Network Partition
  10.6 Clock Synchronization
11 Communication Complexity
  11.1 Diameter & APSP
  11.2 Lower Bound Graphs
  11.3 Communication Complexity
  11.4 Distributed Complexity Theory
12 Wireless Protocols
  12.1 Basics
  12.2 Initialization
    12.2.1 Non-Uniform Initialization
    12.2.2 Uniform Initialization with CD
    12.2.3 Uniform Initialization without CD
  12.3 Leader Election
    12.3.1 With High Probability
    12.3.2 Uniform Leader Election
-
Distributed Computing
Principles of Distributed Computing. Roger Wattenhofer, [email protected]. Spring 2014. An earlier edition of the Wattenhofer lecture notes listed above, with essentially the same table of contents: Leader Election and Tree Algorithms appear in the opposite order (as Chapters 2 and 3), and Chapter 11 is titled "Hard Problems" rather than "Communication Complexity".