Performance Analysis of a Robust Scheduling Algorithm for Scalable Input-Queued Switches

Total Pages: 16
File Type: PDF, Size: 1020 KB

Performance Analysis of a Robust Scheduling Algorithm for Scalable Input-Queued Switches

Itamar Elhanany†, Dan Sadot
Department of Electrical and Computer Engineering, Ben-Gurion University, Israel
†Email: [email protected]

Abstract— In this paper a high-performance, robust and scalable scheduling algorithm for input-queued switches, called distributed sequential allocation (DISA), is presented and analyzed. Contrary to pointer-based arbitration schemes, the proposed algorithm is based on a synchronized channel reservation cycle whereby each input selects a designated output, considering both its local requests as well as global channel availability information. The distinctiveness of the algorithm is in its ability to offer high performance when multiple cells are transmitted within each switching interval. The resulting relaxed switching-time requirement allows for the utilization of commercially available crosspoint switches, yielding a pragmatic and scalable solution for high port-density switching platforms. The efficiency of the algorithm and its robustness are established through analysis and computer simulations addressing diverse traffic scenarios.

Keywords- Packet scheduling algorithms, Switch fabric, Input-queued switches, Non-uniform destination distribution

I. INTRODUCTION

Scalable packet scheduling algorithms that offer high performance for input buffered switches and routers have been the focus of many academic and industry studies in recent years. The ever-growing need for additional bandwidth and more sophisticated service provisioning in next-generation networks requires the introduction of scalable packet scheduling solutions that go beyond legacy schemes. As port densities and line rates increase, input buffered switch architectures are acknowledged as a pragmatic approach for implementing scalable switches and routers. In these architectures, arriving packets or cells are stored in queues at the ingress port until scheduled for transmission. With the introduction of commercially available high port-density crosspoint switches ([1], [2]), it has become more compelling to propose packet scheduling techniques that can control crosspoint switches to offer a low chip-count, high-performance and cost-efficient switch fabric solution.

The majority of the proposed scheduling algorithms targeting high port-density switches are based on an iterative request-accept-grant process [3], [4], [5]. Although such algorithms offer ease of implementation, they typically experience degradation in performance at high loads in cases where the traffic is correlated or non-uniformly distributed between the destinations. As a means of overcoming the inherent limitations of pointer-based schemes, speedup is usually introduced such that the internal fabric bandwidth is S times higher than that of the incoming traffic, where, if N is the number of ports, 1 < S < N. A speedup of N yields performance equivalent to an output-queued switch. However, speeding up the fabric carries enormous implementation ramifications that motivate the search for switching architectures and scheduling algorithms that would overcome the need for a large speedup. An additional drawback of many pointer-based schemes is their connectivity complexity, which is known to be O(N²). Consequently, these algorithms have been proposed primarily for switches with a small number of ports (i.e., N ≤ 32) [3], and are generally unsuitable for next-generation switches where hundreds and even thousands of ports are considered.

In this paper we propose a scheduling algorithm called distributed sequential allocation (DISA) that represents a shift from legacy scheduling schemes by constituting a non-pointer-based approach in which the contention resolution process takes into consideration both local and global resource availability [6]. Moreover, the DISA algorithm requires only O(N log N) connectivity and no speedup.

This paper is organized as follows. Section II describes the DISA scheduling algorithm and a complementing switch architecture. Section III outlines the queueing notation and model formulations utilized throughout the paper for performance analysis. Section IV presents analysis for Bernoulli i.i.d. uniformly distributed arrival patterns, while Section V focuses on analysis of non-uniform destination distribution. In Section VI performance results under correlated arrivals are shown, while in Section VII the conclusions are drawn.
II. THE DISA SCHEDULING ALGORITHM

A. Switch Architecture

Figure 1 shows a diagram of the proposed switch fabric architecture. Packets received from the switch port at each line card flow into a network processor or traffic manager. The processed packets are subsequently forwarded to an ingress fabric interface device, which typically segments the packets into smaller fixed-size cells and provides a buffering stage to the fabric. In order to avoid head-of-line blocking [7], a queueing technique called virtual output queueing (VOQ) is deployed, whereby a unique queue is maintained at the ingress for each of the outputs. We interchange the terms channel and output in the context of allocation by an input port.

Figure 1. The proposed crosspoint-based, single-stage switch fabric architecture.

The efficiency of the scheduler has a paramount impact on the amount of buffering required at the linecards and, consequently, the latency through the system. Two types of control links are utilized in the proposed architecture between the nodes and the fabric. The first is an N-bit bus called the offered destination set (ODS), to which all nodes have read and write access, indicating the reservation status of each of the N outputs. We denote a logical '1' on the ODS as representing an available output port, while a logical '0' represents a previously reserved one. In addition to the ODS, each node receives a dedicated signal from a central arbitration unit (CAU) which indicates when the node is to perform reservation, as will be clarified in the following section. The switching interval is a fixed multiple of the time slot duration, where a time slot represents a single cell time.

B. Scheduling Algorithm

The proposed scheduling algorithm is carried out as a sequential selection process. For each switching interval, which may span one or more time slots, a single output is allocated by each of the ports. Using the dedicated links between the CAU and the ports, the CAU signals each port, in turn, to perform allocation. A random order of signaling the ports is drawn at the beginning of each interval. Once the last port completes allocation, the crosspoint switches are configured and each port transmits cells to its designated output in accordance with the selected configuration while, concurrently, a new allocation interval (for the following transmission cycle) begins. Since transmission and scheduling are carried out simultaneously, there is no transmission "dead-time" involved.

In Figure 2, a block diagram of the channel allocation scheme performed by each node is depicted. We let w_j denote the weight for queue j within the VOQ, and ϕ_i the i-th bit element in the ODS indicating the availability status of output i. Upon receiving a signal from the CAU, each node performs output reservation according to two primary guidelines: (a) global switch resource status, i.e., available outputs, and (b) local considerations reflected by the priorities map of the node's queues. We note that, as will be elaborated in subsequent sections, the term queue weight may refer either to a single bit denoting queue occupancy state (empty or non-empty), or to a value corresponding to the queue size.

Figure 2. The output allocation function performed by each ingress port.

At the initial step, the ODS lines either grant or discard queue requests using a layer of W-bit AND gates, where W is the number of bits per weight. As a result, only weights corresponding to available outputs advance to the next level. The following step is comprised of a queue selection logic (QSL) block, which determines the output to be reserved. In the single-bit-weight case, this level may be implemented using a randomized priority decoder circuit; if queue sizes are considered as weights, then a maximal-value circuit should be deployed. The output of the QSL is an N-bit vector with at most a single bit set to '1', denoting the index of the selected output. This word is inserted into an N-bit OR gate together with ϕ to yield ϕ*, the new ODS. In typical staged designs, the propagation delay of such QSL circuits is of O(log_d N · t_c), where d is the number of comparisons performed in each level and t_c is the propagation delay contributed by each comparison block. Using current 0.13 µm CMOS VLSI technology, for 128 ports with 2 comparisons per level and t_c ≈ 500 psec, the critical path duration is approximately 3.5 nsec.
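As a quick check on this figure (an illustrative back-of-the-envelope calculation, not taken from the paper, using the parameters quoted above: d = 2 comparisons per level, N = 128 ports, t_c ≈ 500 ps):

\[ T_{\mathrm{QSL}} \approx \lceil \log_{d} N \rceil \cdot t_c = \lceil \log_{2} 128 \rceil \cdot 500\ \mathrm{ps} = 7 \cdot 500\ \mathrm{ps} = 3.5\ \mathrm{ns}, \]

which agrees with the critical-path duration stated in the text.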
Upon completion of allocation by the first input node, the CAU randomly signals the next node to commence output allocation. That node will perform the same allocation process, only with the new ODS as a mask of previously allocated outputs. Contention is inherently avoided since at any given time only one node attempts to reserve an output. After all N nodes perform reservation, the CAU configures the crosspoint switches and data traverses the fabric. When multiple classes of service are deployed, a different queue (and hence a unique weight) is associated with each class of service. A basic precondition for supporting the analysis is that the algorithm is stable. To that end, we have the following theorem addressing the issue of stability:

Theorem 1: (Stability) The DISA scheduling algorithm with no speedup is strictly stable for any admissible, uniformly distributed traffic.
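To make the reservation cycle of Section II concrete, the following Python sketch mimics one DISA switching interval as described above. It is an illustrative model only and not the authors' implementation: the hardware AND/QSL/OR stages are collapsed into plain list operations, a single class of service is assumed, and the function name and data layout are invented for the example.

```python
import random

def disa_cycle(weights):
    """One DISA reservation cycle (illustrative sketch).

    weights[i][j] is the weight of VOQ j at input port i, either a 0/1
    occupancy bit or the queue length. Returns a dict mapping each input
    port to the output it reserved, or None if no eligible output remained.
    """
    n = len(weights)                 # number of ports
    ods = [1] * n                    # offered destination set: 1 = output still available
    allocation = {}

    # The CAU signals the ports one at a time, in a random order drawn
    # at the start of every switching interval.
    for port in random.sample(range(n), n):
        # AND stage: mask out the weights of already-reserved outputs.
        eligible = [w if ods[j] else 0 for j, w in enumerate(weights[port])]
        if max(eligible) == 0:
            allocation[port] = None  # nothing queued for any available output
            continue
        # QSL stage: pick the heaviest eligible VOQ (ties go to the lowest
        # index here; a hardware QSL could break ties randomly instead).
        out = max(range(n), key=lambda j: eligible[j])
        ods[out] = 0                 # fold the choice back into the ODS: output now reserved
        allocation[port] = out
    return allocation

# Example: a 4-port switch with queue lengths as weights.
if __name__ == "__main__":
    voq_lengths = [[3, 0, 1, 0],
                   [2, 5, 0, 0],
                   [0, 1, 4, 0],
                   [1, 1, 1, 1]]
    print(disa_cycle(voq_lengths))
```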
Recommended publications
  • Instability Phenomena in Underloaded Packet Networks with QoS Schedulers (M. Ajmone Marsan, M. Franceschinis, E. Leonardi, F. Neri, A. Tarello)
Instability Phenomena in Underloaded Packet Networks with QoS Schedulers. M. Ajmone Marsan, M. Franceschinis, E. Leonardi, F. Neri, A. Tarello.

Abstract— Instability in packet-switching networks is normally associated with overload conditions, since queueing network models show that, in simple configurations, only overload generates instability. However, some results showing that instability can happen also in underloaded queueing networks appeared in the recent literature. Underload instabilities can be produced by complex scheduling algorithms, that bear significant resemblance to the Quality of Service (QoS) schedulers considered today for packet networks. In this paper, we study with fluid models and with adversarial queueing theory possible underload instabilities due to strict-priority schedulers and to Generalized Processor Sharing (GPS) schedulers.

The first results showing that instability can happen also in underloaded queueing networks started to appear less than a decade ago [9], [10], when some classes of queueing networks were identified, for which underload does not automatically guarantee stability, i.e., for which the backlog at some queue in the network can indefinitely grow also when such queue is not overloaded. It is important to observe that these underload instabilities are often produced either by customer routes that visit several times the same queues, or by variations of the customer service times at the different queues, or by complex scheduling algorithms. The first hints to the possible connections between underload …
  • Scalable Multi-Module Packet Switches with Quality of Service
Scalable Multi-module Packet Switches with Quality of Service. Santosh Krishnan. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences, Columbia University, 2006. © 2006 Santosh Krishnan. All Rights Reserved.

ABSTRACT
Scalable Multi-module Packet Switches with Quality of Service. Santosh Krishnan. The rapid growth in packet-based network traffic has resulted in a growing demand for network switches that can scale in capacity with increasing interface transmission rates and higher port counts. Furthermore, the continuing migration of legacy circuit-switched services to a shared IP/MPLS packet-based network requires such network switches to provide an adequate Quality of Service (QoS) in terms of traffic prioritization, as well as bandwidth and delay guarantees. While technology advances, such as the usage of faster silicon and optical switching components, provide one dimension to address this demand, architectural improvements provide the other. This dissertation addresses the latter topic. Specifically, we address the subject of constructing and analyzing high-capacity QoS-capable packet switches using multiple lower-capacity modules. Switches with the output-queueing (OQ) discipline, in theory, provide the best performance in terms of throughput as well as QoS, but do not scale in capacity with increasing rates and port counts. Input-queued (IQ) switches, on the other hand, scale better but require complex arbitration procedures, sometimes impractical, to achieve the same level of performance. We leverage the state-of-the-art in OQ and IQ switching systems and establish a new taxonomy for a class of three-stage packet switches, which we call Buffered Clos Switches.
  • Improving the Scalability of High Performance Computer Systems
IMPROVING THE SCALABILITY OF HIGH PERFORMANCE COMPUTER SYSTEMS. Inaugural dissertation for the attainment of the academic degree of Doctor of Natural Sciences at the Universität Mannheim, submitted by Heiner Hannes Litz from Heidelberg (Diplom-Informatiker der Technischen Informatik), Mannheim, 2010. Dean: Professor Dr. W. Effelsberg, Universität Mannheim. Advisor: Professor Dr. U. Brüning, Universität Heidelberg. Co-advisor: Professor Dr. R. Männer, Universität Heidelberg. Date of the oral examination: 17 March 2011. For Regine.

Abstract
Improving the performance of future computing systems will be based upon the ability of increasing the scalability of current technology. New paths need to be explored, as operating principles that were applied up to now are becoming irrelevant for upcoming computer architectures. It appears that scaling the number of cores, processors and nodes within a system represents the only feasible alternative to achieve Exascale performance. To accomplish this goal, we propose three novel techniques addressing different layers of computer systems. The Tightly Coupled Cluster technique significantly improves inter-node communication within compute clusters. By improving the latency by an order of magnitude over existing solutions, the cost of communication is considerably reduced. This makes it possible to exploit fine-grain parallelism within applications, thereby extending the scalability considerably. The mechanism virtually moves the network interconnect into the processor, bypassing the latency of the I/O interface and rendering protocol conversions unnecessary. The technique is implemented entirely through firmware and kernel-layer software utilizing off-the-shelf AMD processors. We present a proof-of-concept implementation and real-world benchmarks to demonstrate the superior performance of our technique. In particular, our approach achieves a software-to-software communication latency of 240 ns between two remote compute nodes.
  • Fair Scheduling in Input-Queued Switches Under Inadmissible Traffic
Fair Scheduling in Input-Queued Switches under Inadmissible Traffic. Neha Kumar, Rong Pan, Devavrat Shah. Departments of EE & CS, Stanford University. {nehak, rong, devavrat}@stanford.edu

Abstract—In recent years, several high-throughput low-delay scheduling algorithms have been designed for input-queued (IQ) switches, assuming admissible traffic. In this paper, we focus on queueing systems that violate admissibility criteria. We show that in a single-server system with multiple queues, the Longest Queue First (LQF) policy disallows a fair allocation of service rates. We also describe the duality shared by LQF's rate allocation and a fair rate allocation. In general, we demonstrate that the rate allocation performed by the Maximum Weight Matching (MWM) scheduling algorithm in overloaded IQ switches is unfair. We attribute this to the lack of coordination between admission control and scheduling, and propose fair scheduling algorithms that minimize delay for non-overloaded queues. Keywords— Congestion Control, Quality of Service and Scheduling, Stochastic Processes and Queueing Theory, Switches and Switching, Re-

… achieve 100% throughput. These results focus on admissible traffic conditions; in practice, however, traffic is frequently inadmissible. Herein lies our motivation for this paper, where we study scheduling policies for inadmissible traffic conditions in IQ switches. Under admissible traffic, stable scheduling algorithms grant every flow its desired service, and there does not arise a need for fairness in rate allocation. Under inadmissible traffic, not all flows can receive desired service. We observe the rate allocations performed by LQF and MWM in such a scenario, and prove that they lack fairness.
  • Scheduling Algorithm and Evaluating Performance of a Novel 3D-VOQ Switch
IJCSNS International Journal of Computer Science and Network Security, VOL.6 No.3A, March 2006, p. 25. Scheduling Algorithm and Evaluating Performance of a Novel 3D-VOQ Switch. Ding-Jyh Tsaur†, Hsuan-Kuei Cheng††, Chia-Lung Liu††, and Woei Lin††. †Chin Min Institute of Technology, Miaoli, 351 Taiwan, R.O.C. ††National Chung-Hsing University, Taichung, 402 Taiwan, R.O.C.

Summary
This paper studies scheduling algorithms and evaluates the performance of high-speed switching systems. A novel architecture for three-dimensional Virtual Output Queue (3D-VOQ) switches is proposed with a suitable scheduling algorithm to improve the competitive transfer of service. This 3D-VOQ switch, which exactly emulates an output-queued switch with a broad class of service scheduling algorithms, requires no speedup, independently of its incoming traffic pattern and switch size. First, an N×N 3D-VOQ switch is proposed. In this contention-free architecture, the head-of-line problems are eliminated using a few virtual output queues (VOQ) from input ports and the …

… IQ switching is head-of-line (HOL) blocking, which can severely affect the throughput. If each input maintains a single FIFO, then HOL blocking can reduce the throughput to only 58.6% [10]. Restated, HOL blocking can be eliminated entirely using a method known as virtual output queueing, in which each input maintains a separate queue for each output. The throughput of an IQ switch can be increased up to 100% for independent arrivals [9]. Input buffered switches with VOQ can achieve 100% throughput [9,11], thus specifying the relationship of proper scheduling with high speed.
  • On Scheduling Input Queued Cell Switches
New Jersey Institute of Technology Digital Commons @ NJIT Dissertations Electronic Theses and Dissertations Spring 5-31-1999 On scheduling input queued cell switches Shizhao Li New Jersey Institute of Technology Follow this and additional works at: https://digitalcommons.njit.edu/dissertations Part of the Electrical and Electronics Commons Recommended Citation Li, Shizhao, "On scheduling input queued cell switches" (1999). Dissertations. 986. https://digitalcommons.njit.edu/dissertations/986 This Dissertation is brought to you for free and open access by the Electronic Theses and Dissertations at Digital Commons @ NJIT. It has been accepted for inclusion in Dissertations by an authorized administrator of Digital Commons @ NJIT. For more information, please contact [email protected].

Copyright Warning & Restrictions The copyright law of the United States (Title 17, United States Code) governs the making of photocopies or other reproductions of copyrighted material. Under certain conditions specified in the law, libraries and archives are authorized to furnish a photocopy or other reproduction. One of these specified conditions is that the photocopy or reproduction is not to be "used for any purpose other than private study, scholarship, or research." If a user makes a request for, or later uses, a photocopy or reproduction for purposes in excess of "fair use," that user may be liable for copyright infringement. This institution reserves the right to refuse to accept a copying order if, in its judgment, fulfillment of the order would
  • Performance Analysis of Networks on Chips
    Performance analysis of networks on chips Citation for published version (APA): Beekhuizen, P. (2010). Performance analysis of networks on chips. Technische Universiteit Eindhoven. https://doi.org/10.6100/IR657033 DOI: 10.6100/IR657033 Document status and date: Published: 01/01/2010 Document Version: Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers) Please check the document version of this publication: • A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers. Link to publication General rights Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. • Users may download and print one copy of any publication from the public portal for the purpose of private study or research. • You may not further distribute the material or use it for any profit-making activity or commercial gain • You may freely distribute the URL identifying the publication in the public portal.
  • Design and Evaluation of the Combined Input and Crossbar Queued (CICQ) Switch, Kenji Yoshigoe, University of South Florida
    University of South Florida Scholar Commons Graduate Theses and Dissertations Graduate School 8-9-2004 Design and Evaluation of the Combined Input and Crossbar Queued (CICQ) Switch Kenji Yoshigoe University of South Florida Follow this and additional works at: https://scholarcommons.usf.edu/etd Part of the American Studies Commons Scholar Commons Citation Yoshigoe, Kenji, "Design and Evaluation of the Combined Input and Crossbar Queued (CICQ) Switch" (2004). Graduate Theses and Dissertations. https://scholarcommons.usf.edu/etd/1313 This Dissertation is brought to you for free and open access by the Graduate School at Scholar Commons. It has been accepted for inclusion in Graduate Theses and Dissertations by an authorized administrator of Scholar Commons. For more information, please contact [email protected]. Design and Evaluation of the Combined Input and Crossbar Queued (CICQ) Switch by Kenji Yoshigoe A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science and Engineering Department of Computer Science and Engineering College of Engineering University of South Florida Major Professor: Kenneth J. Christensen, Ph.D. Tapas K. Das, Ph.D. Miguel A. Labrador, Ph.D. Rafael A. Perez, Ph.D. Stephen W. Suen, Ph.D. Date of Approval: August 9, 2004 Keywords: Performance Evaluation, Packet switches, Variable-length packets, Stability, Scalability © Copyright 2004, Kenji Yoshigoe Acknowledgements I would like to express my gratitude to my advisor Dr. Kenneth J. Christensen for providing me tremendous opportunities and support. He has opened the door for me to pursue an academic career, and has mentored me to a great extent.
  • Quantifiable Service Differentiation for Packet Networks
Quantifiable Service Differentiation for Packet Networks. A Dissertation Presented to the Faculty of the School of Engineering and Applied Science, University of Virginia, In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy (Computer Science), by Nicolas Christin, August 2003. © Copyright 2003 Nicolas Christin. All rights reserved.

Approvals: This dissertation is submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Science). Nicolas Christin. Approved: Jörg Liebeherr (Advisor), John A. Stankovic (Chair), Tarek F. Abdelzaher, Stephen D. Patek (Minor Representative), Victor Firoiu, Alfred C. Weaver. Accepted by the School of Engineering and Applied Science: Richard W. Miksad (Dean), August 2003.

Abstract
In this dissertation, we present a novel service architecture for the Internet, which reconciles application demand for strong service guarantees with the need for low computational overhead in network routers. The main contribution of this dissertation is the definition and realization of a new service, called Quantitative Assured Forwarding, which can offer absolute and relative differentiation of loss, service rates, and packet delays to classes of traffic. We devise and analyze mechanisms that implement the proposed service, and demonstrate the effectiveness of the approach through analysis, simulation and measurement experiments in a testbed network. To enable the new service, we introduce a set of new traffic control algorithms for network routers. The main mechanism proposed in this dissertation uses a novel technique that performs active buffer management (through dropping of traffic) and rate allocation (for scheduling) in a single step. This is different from prior work which views dropping and scheduling as orthogonal tasks.
  • Fabric-On-A-Chip: Toward Consolidating Packet Switching Functions on Silicon
University of Tennessee, Knoxville TRACE: Tennessee Research and Creative Exchange Doctoral Dissertations Graduate School 12-2007 Fabric-on-a-Chip: Toward Consolidating Packet Switching Functions on Silicon William B. Matthews University of Tennessee - Knoxville Follow this and additional works at: https://trace.tennessee.edu/utk_graddiss Part of the Computer Engineering Commons Recommended Citation Matthews, William B., "Fabric-on-a-Chip: Toward Consolidating Packet Switching Functions on Silicon." PhD diss., University of Tennessee, 2007. https://trace.tennessee.edu/utk_graddiss/239 This Dissertation is brought to you for free and open access by the Graduate School at TRACE: Tennessee Research and Creative Exchange. It has been accepted for inclusion in Doctoral Dissertations by an authorized administrator of TRACE: Tennessee Research and Creative Exchange. For more information, please contact [email protected].

To the Graduate Council: I am submitting herewith a dissertation written by William B. Matthews entitled "Fabric-on-a-Chip: Toward Consolidating Packet Switching Functions on Silicon." I have examined the final electronic copy of this dissertation for form and content and recommend that it be accepted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, with a major in Computer Engineering. Itamar Elhanany, Major Professor. We have read this dissertation and recommend its acceptance: Gregory Peterson, Donald W. Bouldin, Bradley Vander Zanden. Accepted for the Council: Carolyn R. Hodges, Vice Provost and Dean of the Graduate School. (Original signatures are on file with official student records.) To the Graduate Council: I am submitting herewith a dissertation written by William B. Matthews entitled "Fabric-on-a-Chip: Toward Consolidating Packet Switching Functions on Silicon".
  • Data Path Processing in Fast Programmable Routers
Data Path Processing in Fast Programmable Routers. Pradipta De, Computer Science Department, Stony Brook University, Stony Brook, NY 11794-4400. [email protected]

Abstract
The Internet is growing at a fast pace. The link speeds are surging toward 40 Gbps with the emergence of faster link technologies. New applications are coming up which require intelligent processing at the intermediate routers. Switches and routers are becoming the bottlenecks in fast communication. On one hand, faster links deliver more packets every second, and on the other hand, intelligent processing consumes more CPU cycles at the router. The conflicting goals of providing faster but computationally expensive processing call for new approaches in designing routers. This survey takes a look at the core functionalities, like packet classification, buffer memory management, switch scheduling and output link scheduling, performed by a router in its data path processing and discusses the algorithms that aim to reduce the performance bound for these operations. An important requirement for the routers is to provide Quality of Service guarantees. We propose an algorithm to guarantee QoS in Input Queued Routers. The hardware solution to speed up router operation was Application Specific Integrated Circuits (ASICs). But the inherent inflexibility of the method is a demerit as network standards and application requirements are constantly evolving, which seek a faster turnaround time to keep up with the changes. The promise of Network Processors (NP) is the flexibility of general-purpose processors together with the speed of ASICs. We will study the architectural choices for the design of Network Processors and focus on some of the commercially available NPs.
  • MSc Thesis: Design of a High-Performance Buffered Crossbar Switch Fabric Using Network on Chip
2008 Computer Engineering, Mekelweg 4, 2628 CD Delft, The Netherlands, http://ce.et.tudelft.nl/ MSc Thesis (CE-MS-2008-19): Design of a High-Performance Buffered Crossbar Switch Fabric Using Network on Chip. Iria Varela Senín.

Abstract
High-performance routers constitute the basic building blocks of the Internet. The wide majority of today's high-performance routers are designed using a crossbar fabric switch as interconnect topology. The buffered crossbar (CICQ) switching architecture is known to be the best crossbar-based architecture for routers design. However, CICQs require expensive on-chip buffers whose cost grows quadratically with the router port count. Additionally, they use long wires to connect router inputs to outputs, resulting in non-negligible delays. In this thesis, we propose a novel design for the CICQ switch architecture. We design the whole buffered crossbar fabric as a Network on Chip (NoC). We propose two architectural variants. The first is named the Unidirectional NoC (UDN), where the crossbar core is built using a NoC with input and output ports placed on two opposite sides of the fabric chip. Because of the chip pins layout, we improved the UDN architecture using a Multidirectional NoC (MDN) architecture, by placing the inputs and outputs around the perimeter (four sides) of the NoC-based crossbar fabric. Both proposed architectures have been analyzed with appropriate routing algorithms for both unicast and multicast traffic conditions. Our results show that the proposed NoC-based crossbar switching design outperforms the conventional CICQ architecture. Our design offers several advantages when compared to the traditional CICQ design: 1) Speedup, because short wires allow reliable high-speed signalling, and simple local arbitration per on-chip router.