Queueing Behavior and Packet Delays in Network Processor Systems


Jing Fu, Olof Hagsand, Gunnar Karlsson
KTH, Royal Institute of Technology
SE-100 44, Stockholm, Sweden
[email protected] [email protected] [email protected]

ABSTRACT
Network processor systems provide the performance of ASICs combined with the programmability of general-purpose processors. One of the main challenges in designing these systems is the memory subsystem used when forwarding and queueing packets. In this work, we study the queueing behavior and packet delays in a network processor system which works as a router. The study is based on a system model that we have introduced and a simulation tool that is constructed according to the model. Using the simulation tool, both best-effort and diffserv IPv4 forwarding were modeled and tested using real-world and synthetically generated packet traces. The results on queueing behavior have been used to dimension various queues, and can serve as guidelines for designing memory subsystems and queueing disciplines. In addition, the results on packet delays show that our diffserv setup provides good service differentiation between best-effort and priority packets. The study also reveals that the choice of traces has a large impact on the results when evaluating router and switch architectures.

Keywords
network processor, router, queueing behavior

1. INTRODUCTION
During recent years, both Internet traffic and packet transmission rates have grown rapidly. At the same time, new Internet services such as VPNs, QoS, and IPTV are emerging. These trends have implications for the architecture of routers. Ideally, a router should process packets at high line rates, and at the same time be sufficiently programmable and flexible to support current and future Internet services.

To meet these requirements, network processor systems have emerged to provide a flexible router forwarding plane. The goals are to provide the performance of traditional ASICs and the programmability of general-purpose processors. To achieve this, programmable processing elements, special-purpose hardware and general-purpose CPUs are used to perform packet processing tasks. In this work, we model a router using a network processor system, that is, a system constituted by line cards built with network processors.

Packets may arrive at a network processor system in bursts and be forwarded to the same limited resource (i.e., an outgoing interface), causing congestion. Therefore, packets need to be queued at several stages, and designing the memory subsystem and queueing disciplines inside the system becomes an important task.

We extend our earlier work on network processor systems [1] by introducing a revised model. The model is capable of modeling a system with multiple line cards, supporting a variety of parallel processing approaches, queueing disciplines and forwarding services. Using the model, we study the queueing behavior and packet delays with both real-world and synthetically generated packet traces. Our study covers the queueing behavior and packet delays of a best-effort IPv4 forwarding service and of an IPv4 forwarding service supporting diffserv [2].

The rest of the paper is organized as follows. Section 2 overviews related work. Section 3 presents a model for network processor systems. Section 4 presents and characterizes the packet traces used in the simulations. Section 5 shows the experimental setup. Section 6 presents and analyzes the results. Finally, Section 7 concludes the paper.

2. RELATED WORK
A variety of studies investigate the performance of network processor systems in various aspects. These studies are based on analytical models, simulations, or real experiments.

An example of an analytical model is described in [3], where the design space of a network processor is explored. The model is based on a high level of abstraction, where the goal is to quickly identify interesting architectures, which may then be subject to a more detailed evaluation using simulation. Its final output is three candidate architectures, representing cost versus performance tradeoffs.

The IETF ForCES (Forwarding and Control Element Separation) group has defined the ForCES forwarding element model [4]. The model provides a general management model for diverse forwarding elements, including network processors. The observation that current network processors are difficult to program has influenced the work on NetVM, a network virtual machine [5]. NetVM models the main components of a network processor, and aims at influencing the design of next-generation network processor architectures by giving a more unified programming interface.

Many studies on the Intel IXP 1200 network processor have been performed. Spalink et al. demonstrate how to build a software-based router using the IXP 1200 [6]. Their analysis partly focuses on queueing disciplines, including packet scheduling, queue contention and port mapping. Lin et al. present an implementation and evaluation of diffserv over the IXP 1200 [7]. They show in detail the design and implementation of diffserv; the throughput of the flows is measured and the performance bottlenecks of the network processor are identified. For example, they found SRAM to be one of the major performance bottlenecks. Papaefstathiou et al. present how to manage queues in network processors [8]. That study is performed both on the IXP 1200 and on a reference prototype architecture. To summarize, the queueing studies performed on the IXP 1200 focus on technical details, including where and how to queue packets. Still, there are no studies on dimensioning the queues in a network processor system based on real-world and synthetically generated traces.

Finally, in-router queueing behavior and packet delays are studied in a gateway router of the Sprint IP backbone network [9]. The statistics are used to derive a model of router delay performance that accurately predicts packet delays inside a router.

3. A MODEL FOR NETWORK PROCESSOR SYSTEMS
In this section, a model for a network processor system is presented. The major building blocks of the model are line cards using network processors, a switch fabric and a route processor. In other words, we have modeled a router using network-processor-based line cards. The model is based on a simpler model presented in earlier work [1].

The basic building blocks of a network processor line card are processing blocks, engines, channels and queues. Such a line card can be logically separated into an ingress and an egress line card. Fig. 1 shows a network processor system that represents a router with four ingress and four egress line cards. We assume that there is only one port on each line card; packets arriving at this port are first processed at the ingress line card, and are then transmitted through the switch fabric to the egress line cards. Based on the queueing discipline, the packets can be queued either at the ingress or at the egress line cards. Moreover, there is a route processor whose main task is to handle routing and management protocols.

In this model, there is no slow-path or terminating traffic. In other words, packets are only sent between line cards, and there is no traffic to or from the route processor.

[Figure 1: Network processor system overview — sources and sinks attached to four ingress and four egress line cards, interconnected by a switch fabric, with a route processor.]

3.1 Network Processor Line Cards
Processing blocks
Processing blocks are abstractions of processing elements (PEs) in a line card. In a block, a program runs on the local processing unit and processes the received packets. A block may need to wait for external access to memory or an engine in order to complete, thus reducing its utilization. Using several threads increases the utilization by processing several packets simultaneously: while one thread is blocked, another may take over the execution.

Engines
Engines are special-purpose hardware available to a network processor that perform specific tasks. They are usually triggered by PEs. Examples are TCAM engines and checksum engines.

Channels
Processing blocks and engines are interconnected by channels that represent potential paths for packet transfer.

Queues
There are several places in the system where queues are necessary. First, packets may arrive at an ingress line card at a higher rate than the service rate of the line card. Second, several ingress line cards may simultaneously transmit a large number of packets to the same egress line card. Third, the introduction of quality of service may cause best-effort packets to be queued when higher-priority traffic is present.

In order to make the processing of packets more efficient, a special meta-packet is created. This meta-packet includes the packet header, information about the packet and a pointer to the actual packet. While the actual packet resides in slower SDRAM, the meta-packet is stored in faster SRAM for faster access. This means that an SRAM operation needs to be performed when transferring a packet between processing blocks, while an SDRAM operation is needed to transmit the entire packet over the backplane.

All queues are formed from meta-packets and are FIFOs implementing either tail-drop or random early discard (RED) policies.
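The queue organization described above can be sketched in a few lines. This is a minimal illustrative model, not the authors' implementation: the class names, the linear RED drop profile and all thresholds are our assumptions.

```python
import random
from collections import deque
from dataclasses import dataclass

@dataclass
class MetaPacket:
    """Descriptor kept in fast SRAM: header data, packet length, and a
    pointer (here, an address) to the full packet stored in slower SDRAM."""
    header: bytes
    length: int
    sdram_addr: int

class MetaPacketQueue:
    """FIFO of meta-packets with tail drop and, optionally, a RED-style
    early discard whose probability grows linearly between two thresholds."""
    def __init__(self, capacity, red_min=None, red_max=None, max_p=0.1):
        self.q = deque()
        self.capacity = capacity
        self.red_min, self.red_max, self.max_p = red_min, red_max, max_p

    def drop_probability(self):
        if self.red_min is None:
            return 0.0                       # plain tail drop: no early discard
        depth = len(self.q)
        if depth < self.red_min:
            return 0.0
        if depth >= self.red_max:
            return 1.0
        return self.max_p * (depth - self.red_min) / (self.red_max - self.red_min)

    def enqueue(self, mp, rand=random.random):
        if len(self.q) >= self.capacity:     # tail drop when full
            return False
        if rand() < self.drop_probability(): # probabilistic early discard
            return False
        self.q.append(mp)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None
```

Note that only the small descriptor is enqueued and moved between processing blocks; the packet body stays in SDRAM until it must cross the backplane, which mirrors the SRAM/SDRAM split described in the text.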
[Figure: ingress and egress line card internals — Source Ethernet and Destination Ethernet processing blocks with TCAM engines, between source and sink.]

Table 1: Autocorrelation of the FUNET interarrival times.
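Table 1 characterizes a trace by the autocorrelation of its packet interarrival times. As a sketch of how such a statistic is computed from a gap series (the function and the sample data below are ours, not taken from the paper):

```python
def autocorrelation(xs, lag):
    """Sample autocorrelation of a series at the given lag,
    normalized by the (biased) sample variance."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var

# Made-up interarrival times (microseconds): short and long gaps alternate,
# so the lag-1 autocorrelation is strongly negative and lag-2 is positive.
gaps = [1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 1.0, 2.0]
lag1 = autocorrelation(gaps, 1)
lag2 = autocorrelation(gaps, 2)
```

A trace with near-zero autocorrelation at all lags behaves like independent arrivals, while persistent positive correlation indicates burstiness, which drives the queue dimensioning studied in the paper.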