A Network Processor Architecture for High Speed Carrier Grade Ethernet Networks
TECHNISCHE UNIVERSITÄT MÜNCHEN
Lehrstuhl für Integrierte Systeme

A Network Processor Architecture for High Speed Carrier Grade Ethernet Networks

Kimon Karras

Complete reprint of the dissertation approved by the Faculty of Electrical Engineering and Information Technology of the Technische Universität München for the award of the academic degree of Doktor-Ingenieur (Dr.-Ing.).

Chairman: Univ.-Prof. Dr.-Ing. Wolfgang Kellerer
Examiners of the dissertation:
1. Univ.-Prof. Dr. sc.techn. Andreas Herkersdorf
2. Univ.-Prof. Dr.-Ing. Andreas Kirstädter, Universität Stuttgart

The dissertation was submitted to the Technische Universität München on 13.05.2013 and accepted by the Faculty of Electrical Engineering and Information Technology on 24.03.2014.

Preface

This work was performed at the Institute for Integrated Systems of the Technical University of Munich under the auspices of Prof. Dr. sc.techn. Andreas Herkersdorf and Dr.-Ing. Thomas Wild, both of whom I have to whole-heartedly thank for giving me the opportunity of working with them and for their invaluable help and guidance throughout the five years of my stay at the institute. A further thanks goes to all the people at the institute with whom I've cooperated over the years and who have made life enjoyable both on a professional and on a personal level. A very great thanks is due to Daniel Llorente for keeping me company and bearing with me during most of this time. His contribution to this thesis is immense and without him it wouldn't have been possible. ¡Muchas gracias, compañero! Furthermore, I would like to thank all the partner companies and participants of the 100GET project, which formed the basis for this dissertation. It was a pleasure to work with all of you on such an interesting project and to exchange ideas with such a distinguished crowd. Special credit is due to Dr.-Ing.
Thomas Michaelis, who coordinated the 100GET-E3 subproject masterfully and without whom reaching the project goals would have been that much more difficult. Gratitude is also owed to all my friends in Munich, in no particular order: Iason Vittorias, Akis Kipouridis, Harry Daniilidis, Christian Fuchs, Theodoros Papadopoulos, Panayiotis Mitsakis, Tilemachos Matiakis and Nana Kouri, as well as many others who made sure that all work and no play didn't make Jack a dull boy after all. These five years have been some of the most amazing thus far and I have all of you to thank for it. And finally, last but not least, Katerini Papathanasiou for lending a shoulder to lean on while seeing this dissertation through.

Munich, March 2013
Kimon Karras

Abstract

Next generation core networks will raise the performance bar for processing modules to at least 100 Gbps throughput on a single port. Currently, Network Processor (NP) vendors deal with higher throughput demands by increasing the number of processing cores in their designs. The NPU100 is a novel network processor architecture aimed at satisfying the packet processing requirements of 100 Gbps Carrier Grade Ethernet (CGE) networks, while reducing power consumption in comparison to available architecture paradigms. To accomplish this, a multi-layered, deeply pipelined and programmable architecture was proposed and implemented as an FPGA prototype. The NPU100 architecture can be extended and tailored to the demands of various layer 2.5 network architectures. Conventional programmable approaches, either based on massively parallel general-purpose CPUs or multiple lower-rate NPs, do not scale towards high-speed carrier networks from a cost, performance (packet throughput) and power dissipation perspective. Architecture design and development of an integrated, programmable component that satisfies these requirements is a scientific and technical challenge.
The NPU100 architecture is based on a pipelined approach because of its inherently predictable behavior and scalable performance. In order to achieve modularity and extendibility, NPU100 processing modules can be rearranged within each configuration. The NPU100 design is divided into different levels of hierarchy, which can be replicated or arranged as required. Multiple pipeline stages were grouped into entities called minipipelines, which were then placed in parallel to one another. A minipipeline includes the processing modules required to complete the quintessential network processing tasks, like address lookups, packet header field extractions and insertions, and logical and arithmetic operations. In order to provide the necessary degrees of flexibility in case of changing network protocols or networking applications, the NPU100 pipeline modules utilize so-called Weakly Programmable Processing Engines (WPPEs). A WPPE is less flexible than, for example, a RISC core. It has a reduced networking-specific instruction set (a total of 8 microcommands per module) which is, however, sufficiently flexible to adapt its functionality to layer 2.5 networking requirements. WPPEs are programmed in a high-level, C-like programming language. A compiler is provided to translate the high-level code into the microcommand machine language and to assign the respective microcommands to the appropriate processing modules. WPPE microprograms can be modified at runtime via a PCI Express interface without disrupting NPU100 processing. This easy-to-use programming scheme is one of the core contributions of this work and allows the user to easily leverage the NPU100's processing resources without requiring in-depth architectural knowledge or the writing of low-level code.
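To make the weak-programmability idea concrete, the following is a minimal Python sketch of such an engine: a small, fixed microcommand repertoire applied to a packet header, with a microprogram expressed as a sequence of commands. The microcommand names, operand format, and the TTL-decrement example are hypothetical illustrations, not the NPU100's actual instruction set or toolchain.

```python
# Sketch of a weakly programmable processing engine (WPPE).
# The microcommand set below is a hypothetical illustration; the
# NPU100's real 8-command repertoire is not reproduced here.

MICROCOMMANDS = {}

def microcommand(fn):
    """Register a function as a named microcommand."""
    MICROCOMMANDS[fn.__name__] = fn
    return fn

@microcommand
def extract(state, reg, offset, width):
    """Copy `width` bits starting at bit `offset` of the header into a register."""
    state["regs"][reg] = (state["header"] >> offset) & ((1 << width) - 1)

@microcommand
def insert(state, reg, offset, width):
    """Write a register value back into the header at bit `offset`."""
    mask = ((1 << width) - 1) << offset
    state["header"] = (state["header"] & ~mask) | ((state["regs"][reg] << offset) & mask)

@microcommand
def add(state, dst, src, imm):
    """Add an immediate to a register (covers arithmetic microoperations)."""
    state["regs"][dst] = state["regs"][src] + imm

def run_wppe(program, header):
    """Execute a microprogram (a list of (command, *operands) tuples) on a header."""
    state = {"header": header, "regs": {}}
    for cmd, *args in program:
        MICROCOMMANDS[cmd](state, *args)
    return state

# Example microprogram: decrement an MPLS-like TTL field held in the
# low 8 bits of the header word.
prog = [
    ("extract", "ttl", 0, 8),
    ("add", "ttl", "ttl", -1),
    ("insert", "ttl", 0, 8),
]
result = run_wppe(prog, 0x12345640)
print(hex(result["header"]))  # 0x1234563f
```

The design point this illustrates is that a small, fixed command repertoire keeps per-stage latency bounded and predictable, which is what allows a deeply pipelined engine to accept a new header every cycle, while still leaving the header manipulation itself programmable.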
The folded, parallel arrangement of NPU100 minipipelines offers a significant improvement in power consumption in comparison to a conventional stretched pipelined architecture where all minipipelines are arranged in series. This is due to a combination of two factors. First, in the case of multiple stacked MPLS labels or, more generally, tunneled packet headers, a minipipeline handles the processing of a single label or header. The number of parallel minipipelines determines the maximum depth of packet header stacking. For packets where fewer (than the maximum number of) headers have to be processed, packet headers can skip the unused minipipelines, which can then be dynamically clock or power gated. Second, only one label or header must be transported through each minipipeline. This leads to a narrower data path in comparison to the stretched pipeline, where all headers traverse all pipeline stages. XPower analysis revealed that the reduction in data path width resulted in a power consumption improvement of 25%, while the reduction of traversed pipeline stages can bring this figure up to 78% in the most favorable traffic scenario. The implemented FPGA prototype runs at a system clock of 66 MHz, accepting a new 192-bit packet header with every clock cycle and achieving a throughput of up to 46 Gbps. Synthesis runs with a TSMC 65nm CMOS technology library showed that an ASIC realization of the design exceeds the 100 Gbps requirement.

Zusammenfassung

Next-generation core networks will raise the performance level demanded of packet processing modules to a throughput of at least 100 Gbit/s per port.
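The FPGA throughput figure quoted in the abstract can be sanity-checked with a back-of-the-envelope calculation: if the pipeline accepts one packet header per 66 MHz clock cycle, the achievable line rate depends on the size of the frame each header represents. The frame sizes below are our illustrative assumptions (standard Ethernet wire overhead), not figures taken from the thesis.

```python
# Back-of-the-envelope throughput check for a pipeline that accepts one
# packet header per clock cycle. Frame sizes are illustrative assumptions.

CLOCK_HZ = 66e6  # FPGA prototype system clock: one header accepted per cycle

def throughput_gbps(frame_bytes, clock_hz=CLOCK_HZ):
    """Line rate if every clock cycle carries the header of one such frame."""
    return clock_hz * frame_bytes * 8 / 1e9

# A minimum-size Ethernet frame occupies 84 bytes on the wire
# (64-byte frame + 8-byte preamble + 12-byte inter-frame gap).
print(f"min-size frames: {throughput_gbps(84):.1f} Gbps")  # ~44.4 Gbps
print(f"88-byte frames:  {throughput_gbps(88):.1f} Gbps")  # ~46.5 Gbps
```

This suggests the "up to 46 Gbps" figure corresponds to a stream of near-minimum-size frames, the worst case for header-processing rate; larger frames at the same header rate would carry proportionally more payload bits.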
Current network processor vendors cope with the higher throughput demands by increasing the number of processing modules in their designs. The NPU100 is a novel network processor (NP) that aims to satisfy the packet processing requirements of 100 Gbit/s Carrier Grade Ethernet (CGE) networks. To achieve this, a multi-layered, deeply pipelined and programmable architecture was introduced and implemented as an FPGA prototype. The NPU100 architecture can easily be extended and adapted to the requirements of various layer 2.5 protocols. Conventional programmable approaches are based either on a massive number of CPUs or on multiple NPs with lower throughput. These approaches do not scale from a cost, performance (packet throughput) or power dissipation perspective. The design and development of an architecture for an integrated, programmable component that addresses all of these requirements simultaneously is a scientific and technical challenge. The NPU100 architecture is based on a pipelined approach in order to exploit its inherently predictable behavior and scalable performance. To achieve the goals of modularity and extendibility, NPU100 modules can be rearranged as required. The NPU100 design is divided into different hierarchy levels, which can be replicated or restructured depending on the respective requirements. Multiple pipeline stages are grouped into units named minipipelines, which are then arranged in parallel to one another. A minipipeline contains all modules required for carrying out the essential network processing tasks, such as address lookups, packet header field extractions, and logical and arithmetic operations.
To provide the functionality required by different network protocols and networking applications, so-called Weakly Programmable Processing Engines (WPPEs) were employed in the design of the NPU100 pipeline modules. A WPPE is not as flexible as, for example, a RISC processor core. It has a restricted network-specific instruction set (a total of 8 microcommands per module), which is, however, sufficiently flexible to adapt its functionality to layer 2.5 networking requirements.