Nagarajanr54359.Pdf (2.985Mb)
Total Pages: 16
File Type: pdf, Size: 1020Kb
Recommended publications
- Performance and Energy Efficient Network-on-Chip Architectures
Sriram R. Vangal. Linköping Studies in Science and Technology, Dissertation No. 1130, Electronic Devices, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden, 2007. ISBN 978-91-85895-91-5, ISSN 0345-7524. Cover image: a chip microphotograph of the industry's first programmable 80-tile teraFLOPS processor, implemented in a 65-nm eight-metal CMOS technology.
Abstract: The scaling of MOS transistors into the nanometer regime opens the possibility of creating large Network-on-Chip (NoC) architectures containing hundreds of integrated processing elements with on-chip communication. NoC architectures, with structured on-chip networks, are emerging as a scalable and modular solution to global communication within large systems-on-chip. NoCs mitigate the emerging wire-delay problem and address the need for substantial interconnect bandwidth by replacing today's shared buses with packet-switched router networks. With on-chip communication consuming a significant portion of the chip power and area budgets, there is a compelling need for compact, low-power routers. While applications dictate the choice of the compute core, the advent of multimedia applications, such as three-dimensional (3D) graphics and signal processing, places stronger demands for self-contained, low-latency floating-point processors with increased throughput.
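To make the packet-switched mesh idea concrete, here is a minimal Python sketch of dimension-ordered XY routing on a 2D tile mesh. It is illustrative only, not material from the thesis; the 8x10 grid (loosely suggested by the 80-tile cover image) and the route function itself are assumptions.

# Minimal sketch, assuming an 8x10 tile mesh and simple dimension-ordered
# (XY) routing; real on-chip routers also implement flow control, virtual
# channels, and arbitration.
def xy_route(src, dst):
    """Return the list of (x, y) tiles a packet visits, X dimension first."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                  # walk along X until the column matches
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                  # then walk along Y to the destination row
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

hops = len(xy_route((0, 0), (7, 9))) - 1
print(f"corner-to-corner hops on an 8x10 mesh: {hops}")   # prints 16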
- Compaq Chooses SMT for Alpha: Simultaneous Multithreading Exploits Instruction- and Thread-Level Parallelism
Microprocessor Report (The Insiders' Guide to Microprocessor Hardware), Volume 13, Number 16, December 6, 1999. By Keith Diefendorff.
As it climbs rapidly past the 100-million-transistor-per-chip mark, the microprocessor industry is struggling with the question of how to get proportionally more performance out of these new transistors. Speaking at the recent Microprocessor Forum, Joel Emer, a Principal Member of the Technical Staff in Compaq's Alpha Development Group, described his company's approach: simultaneous multithreading, or SMT. Emer's interest in SMT was inspired by the work of Dean Tullsen, who described the technique in 1995 while at the University of Washington. Since that time, Emer has been studying SMT along with other researchers at Washington. Once convinced of its value, he began evangelizing SMT within Compaq.
Given a full complement of on-chip memory, increasing the clock frequency will increase the performance of the core. One way to increase frequency is to deepen the pipeline. But with pipelines already reaching upwards of 12–14 stages, mounting inefficiencies may close this avenue, limiting future frequency improvements to those that can be attained from semiconductor-circuit speedup. Unfortunately this speedup, roughly 20% per year, is well below that required to attain the historical 60% per year performance increase. To prevent bursting this bubble, the only real alternative left is to exploit more and more parallelism. Indeed, the pursuit of parallelism occupies the energy of many processor architects today. There are basically two theories: one is that instruction-level parallelism (ILP) is abundant and remains a viable resource waiting to be tapped;
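As a toy illustration of what "simultaneous" means in SMT (an illustrative sketch, not Compaq's or Tullsen's design), the issue stage of one wide core can fill its issue slots from several hardware threads in the same cycle; the 4-wide issue width and round-robin thread selection below are assumptions made only for this example.

from collections import deque

# Minimal sketch, assuming a 4-wide issue stage and round-robin selection
# among ready hardware-thread queues; a real SMT core uses heuristics such
# as ICOUNT and must also share rename registers, caches, and the pipeline.
def smt_issue(thread_queues, width=4):
    """Pick up to `width` ready instructions per cycle across all threads."""
    issued = []
    order = deque(range(len(thread_queues)))
    while len(issued) < width and any(thread_queues):
        tid = order[0]
        order.rotate(-1)                          # rotate thread priority for fairness
        if thread_queues[tid]:
            issued.append((tid, thread_queues[tid].popleft()))
    return issued

queues = [deque(["add", "ld"]), deque(["mul"]), deque(["br", "st", "xor"])]
print(smt_issue(queues))   # one cycle mixes instructions from threads 0, 1, and 2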
- Computer Architecture: Dataflow (Part I)
Prof. Onur Mutlu, Carnegie Mellon University. A note on this lecture: these slides are from 18-742 Fall 2012, Parallel Computer Architecture, Lecture 22: Dataflow I. Video: http://www.youtube.com/watch?v=D2uue7izU2c&list=PL5PHm2jkkXmh4cDkC3s1VBB7-njlgiG5d&index=19
Some required dataflow readings. Dataflow at the ISA level: Dennis and Misunas, "A Preliminary Architecture for a Basic Data Flow Processor," ISCA 1974; Arvind and Nikhil, "Executing a Program on the MIT Tagged-Token Dataflow Architecture," IEEE TC 1990. Restricted dataflow: Patt et al., "HPS, a new microarchitecture: rationale and introduction," MICRO 1985; Patt et al., "Critical issues regarding HPS, a high performance microarchitecture," MICRO 1985.
Other related recommended readings. Dataflow: Gurd et al., "The Manchester prototype dataflow computer," CACM 1985; Lee and Hurson, "Dataflow Architectures and Multithreading," IEEE Computer 1994. Restricted dataflow: Sankaralingam et al., "Exploiting ILP, TLP and DLP with the Polymorphous TRIPS Architecture," ISCA 2003; Burger et al., "Scaling to the End of Silicon with EDGE Architectures," IEEE Computer 2004.
Today: start dataflow. Readings, Data Flow (I): Dennis and Misunas, ISCA 1974; Treleaven et al., "Data-Driven and Demand-Driven Computer Architecture," ACM Computing Surveys 1982; Veen, "Dataflow Machine Architecture," ACM Computing Surveys 1986; Gurd et al., CACM 1985; Arvind and Nikhil, IEEE TC 1990.
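As a minimal illustration of the dataflow execution model these readings cover (an illustrative sketch, not code from the lecture), an operation fires as soon as all of its input tokens have arrived, so execution order is driven by data availability rather than by a program counter.

import operator

# Minimal sketch of token-driven firing in a static dataflow graph; the Node
# class and the tiny (a + b) * (a - b) example graph are illustrative only.
class Node:
    def __init__(self, name, op, arity):
        self.name, self.op, self.arity = name, op, arity
        self.tokens = []                      # operand tokens received so far
        self.consumers = []                   # downstream nodes fed by this node

    def receive(self, value, ready):
        self.tokens.append(value)
        if len(self.tokens) == self.arity:    # firing rule: all inputs present
            ready.append(self)

def run(initial_tokens):
    ready, results = [], {}
    for node, value in initial_tokens:
        node.receive(value, ready)
    while ready:                              # no program counter: fire whatever is ready
        node = ready.pop()
        result = node.op(*node.tokens)
        results[node.name] = result
        for consumer in node.consumers:
            consumer.receive(result, ready)
    return results

add = Node("add", operator.add, 2)
sub = Node("sub", operator.sub, 2)
mul = Node("mul", operator.mul, 2)
add.consumers, sub.consumers = [mul], [mul]
print(run([(add, 5), (add, 3), (sub, 5), (sub, 3)]))   # mul fires last: (5+3)*(5-3) = 16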
- Prozessorarchitektur am Beispiel des AMD Athlon (Processor Architecture Using the AMD Athlon as an Example)
Prepared by Alexander Tabakoff; supervising teacher: Prof. Wolfgang Schinwald; published February 26, 2001.
Contents: 1. Historical and general introduction (application areas of processors; the first processor; development up to the 586; development history of the AMD Athlon and the Pentium III). 2. Fundamentals of processor architecture and processor fabrication (physical: the structure of a transistor and its practical consequences; logical; the manufacturing of processors and its limits; the von Neumann machine). 3. The processor architecture of the AMD Athlon compared with its competitors (the design of the AMD Athlon; its bus system; its cache architecture; advantages and disadvantages over other designs; interview with Jan Gütter, public relations spokesman for AMD). 4. Appendix (purpose of this work; glossary; bibliography; work log; image credits).
From section 1.1, application areas of processors: Processors today have a wide variety of application areas. They are used in cars, set-top boxes, game consoles, mobile phones, pocket calculators, PCs, and so on. PC processors account for only about 2% of the market. Despite this comparatively small production share, PC processors enjoy a considerably higher level of public recognition; almost everyone knows PC processors such as the Intel Pentium.
- Configurable Fine-Grain Protection for Multicore Processor Virtualization
David Wentzlaff (Princeton University), Christopher J. Jackson (Tilera Corp.), Patrick Griffin (Google Inc.), and Anant Agarwal (Tilera Corp.).
Abstract: Multicore architectures, with their abundant on-chip resources, are effectively collections of systems-on-a-chip. The protection system for these architectures must support multiple concurrently executing operating systems (OSes) with different needs, and manage and protect the hardware's novel communication mechanisms and hardware features. Traditional protection systems are insufficient; they protect supervisor from user code, but typically do not protect one system from another, and only support fixed assignment of resources to protection levels. In this paper, we propose an alternative to traditional protection systems which we call configurable fine-grain protection (CFP). CFP enables the dynamic assignment of in-core resources to protection levels. We investigate how CFP enables different system software stacks to utilize the same configurable ...
[Figure 1: With CFP, system software can dynamically set the privilege level needed to access each fine-grain processor resource (TLB access, DMA engine, "user" network, I/O network). Key: 0 = user code, 1 = OS, 2 = hypervisor, 3 = hypervisor debugger.]
From the introduction: ... multicore systems, a protection system must both temporally protect and spatially isolate access to resources. Spatial isolation is the need to isolate different system software stacks concurrently executing on spatially disparate cores in a multicore system. Spatial isolation is especially important now that multicore systems have directly accessible networks connecting cores to other cores and cores to I/O devices.
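To illustrate the configurable fine-grain protection idea in the abstract above, here is a minimal sketch; it is illustrative only, not the paper's hardware interface, and the resource names, default levels, and API are assumptions (only the 0-3 privilege numbering follows the figure key).

# Minimal sketch, assuming a per-core table that maps each fine-grain
# resource to the minimum privilege level allowed to touch it; levels follow
# the figure key (0 = user, 1 = OS, 2 = hypervisor, 3 = hypervisor debugger).
USER, OS, HYPERVISOR, HV_DEBUGGER = 0, 1, 2, 3

class ProtectionTable:
    def __init__(self):
        # illustrative defaults, not the figure's exact configuration
        self.min_level = {
            "tlb_access":   HYPERVISOR,
            "dma_engine":   HYPERVISOR,
            "user_network": USER,
            "io_network":   OS,
        }

    def set_level(self, resource, level, caller_level):
        # only software at or above a resource's current level may reconfigure it
        if caller_level < self.min_level[resource]:
            raise PermissionError(f"{resource}: caller level {caller_level} too low")
        self.min_level[resource] = level

    def check(self, resource, caller_level):
        return caller_level >= self.min_level[resource]

table = ProtectionTable()
table.set_level("dma_engine", OS, caller_level=HYPERVISOR)   # delegate DMA to the guest OS
print(table.check("dma_engine", OS))      # True after delegation
print(table.check("tlb_access", USER))    # False: still hypervisor-only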
- A Bibliography of Publications in IEEE Micro
Nelson H. F. Beebe, University of Utah, Department of Mathematics, 110 LCB, 155 S 1400 E RM 233, Salt Lake City, UT 84112-0090, USA. Tel: +1 801 581 5254, fax: +1 801 581 4148. WWW URL: http://www.math.utah.edu/~beebe/. 16 September 2021, Version 2.108.
The excerpt continues with the start of the bibliography's title-word cross-reference, an index of terms such as "-Core", "-Cubes", "3-D", "-nm", and "0.18-Micron" mapped to citation keys (for example, [MAT+18], [YW94], [ASX19]).
- Optimizing SIMD Execution in HW/SW Co-Designed Processors
Rakesh Kumar, Department of Computer Architecture, Universitat Politècnica de Catalunya. Advisors: Alejandro Martínez and Antonio González, Intel Barcelona Research Center (González also at Universitat Politècnica de Catalunya). A thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy / Doctor per la UPC.
Abstract: SIMD accelerators are ubiquitous in microprocessors from different computing domains. Their high compute power and hardware simplicity improve overall performance in an energy-efficient manner. Moreover, their replicated functional units and simple control mechanism make them amenable to scaling to higher vector lengths. However, code generation for these accelerators has been a challenge since their inception. Compilers generate vector code conservatively to ensure correctness; as a result, they lose significant vectorization opportunities and fail to extract the maximum benefit from SIMD accelerators. This thesis proposes to vectorize the program binary at runtime in a speculative manner, in addition to compile-time static vectorization. Several environments provide the runtime profiling and optimization support required for dynamic vectorization, the most prominent being 1) dynamic binary translators and optimizers (DBTOs) and 2) hardware/software (HW/SW) co-designed processors. A HW/SW co-designed environment provides several advantages over DBTOs, such as transparent incorporation of new hardware features and binary compatibility. We therefore use a HW/SW co-designed environment to assess the potential of speculative dynamic vectorization. Furthermore, we analyze vector code generation for wider vector units and find that, even though SIMD accelerators are amenable to scaling from the hardware point of view, vector code generation at higher vector lengths is even more challenging.
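As a toy illustration of the packing step behind vectorization, whether static or speculative and dynamic (an illustrative sketch, not the thesis's algorithm), independent scalar operations on consecutive elements can be fused into one vector operation of the target width:

# Minimal sketch, assuming a toy IR of ('add', element_index) operations over
# consecutive array elements and a 4-wide SIMD unit; a real vectorizer must
# also prove independence, handle alignment, and guard speculative packs.
def pack_adds(scalar_ops, width=4):
    vector_ops, i = [], 0
    ops = sorted(scalar_ops, key=lambda op: op[1])
    while i < len(ops):
        group = ops[i:i + width]
        indices = [idx for _, idx in group]
        contiguous = indices == list(range(indices[0], indices[0] + len(indices)))
        if len(group) == width and contiguous:
            vector_ops.append(("vadd", indices[0], width))   # one SIMD op covers the group
            i += width
        else:
            vector_ops.append(("add", indices[0]))           # leftovers stay scalar
            i += 1
    return vector_ops

print(pack_adds([("add", i) for i in range(10)]))
# [('vadd', 0, 4), ('vadd', 4, 4), ('add', 8), ('add', 9)]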
UNIVERSITY of CALIFORNIA, SAN DIEGO Holistic Design for Multi-Core Architectures a Dissertation Submitted in Partial Satisfactio
UNIVERSITY OF CALIFORNIA, SAN DIEGO Holistic Design for Multi-core Architectures A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Computer Science (Computer Engineering) by Rakesh Kumar Committee in charge: Professor Dean Tullsen, Chair Professor Brad Calder Professor Fred Chong Professor Rajesh Gupta Dr. Norman P. Jouppi Professor Andrew Kahng 2006 Copyright Rakesh Kumar, 2006 All rights reserved. The dissertation of Rakesh Kumar is approved, and it is acceptable in quality and form for publication on micro- film: Chair University of California, San Diego 2006 iii DEDICATIONS This dissertation is dedicated to friends, family, labmates, and mentors { the ones who taught me, indulged me, loved me, challenged me, and laughed with me, while I was also busy working on my thesis. To Professor Dean Tullsen for teaching me the values of humility, kind- ness,and caring while trying to teach me football and computer architecture. For always encouraging me to do the right thing. For always letting me be myself. For always believing in me. For always challenging me to dream big. For all his wisdom. And for being an adviser in the truest sense, and more. To Professor Brad Calder. For always caring about me. For being an inspiration. For his trust. For making me believe in myself. For his lies about me getting better at system administration and foosball even though I never did. To Dr Partha Ranganathan. For always being there for me when I would get down on myself. And that happened often. For the long discussions on life, work, and happiness. -
- CG-OoO: Energy-Efficient Coarse-Grain Out-of-Order Execution
Milad Mohammadi (Stanford University), Tor M. Aamodt (University of British Columbia), William J. Dally (Stanford University and NVIDIA Research).
Abstract: We introduce the Coarse-Grain Out-of-Order (CG-OoO) general-purpose processor, designed to achieve close to In-Order processor energy while maintaining Out-of-Order (OoO) performance. CG-OoO is an energy-performance proportional general-purpose architecture that scales according to the program load. Block-level code processing is at the heart of this architecture; CG-OoO speculates, fetches, schedules, and commits code at block-level granularity. It eliminates unnecessary accesses to energy-consuming tables, and turns large tables into smaller and distributed tables that are cheaper to access. CG-OoO leverages compiler-level code optimizations to deliver efficient static code, and exploits dynamic instruction-level parallelism and block-level parallelism.
From the introduction: ... CG-OoO model to make it even more energy efficient. Despite the significant achievements in improving the energy and performance properties of the OoO processor in recent years [2], studies show the energy and performance attributes of the OoO execution model remain superlinearly proportional [3, 4]. Studies indicate control speculation and the dynamic scheduling technique amount to 88% and 10% of the OoO superior performance compared to the In-Order (InO) processor [5]. Scheduling and speculation in OoO are performed at instruction granularity regardless of the instruction type, even though they are mainly effective during unpredictable dynamic events (e.g. unpredictable cache misses) [5]. Furthermore, our studies show speculation and dynamic scheduling amount to 67% and 51% of ...
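The energy argument above, namely that block-level processing cuts accesses to large centralized structures, can be made concrete with a back-of-envelope count; this is an illustration with assumed numbers, not the paper's evaluation.

# Minimal sketch, assuming a 1M-instruction trace and 8-instruction blocks:
# count how often a centralized scheduling structure is touched under
# instruction-grain versus block-grain processing.
def global_scheduler_accesses(num_instructions, block_size):
    instruction_grain = num_instructions             # every instruction probes the big window
    block_grain = num_instructions // block_size     # one global event per block; in-block
    return instruction_grain, block_grain            # issue uses small local structures

n, b = 1_000_000, 8
ig, bg = global_scheduler_accesses(n, b)
print(f"instruction-grain: {ig:,} global accesses")
print(f"block-grain:       {bg:,} global accesses ({ig // bg}x fewer)")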
- Sony's Emotionally Charged Chip: Killer Floating-Point "Emotion Engine" to Power PlayStation 2000
Microprocessor Report (The Insiders' Guide to Microprocessor Hardware), Volume 13, Number 5, April 19, 1999. By Keith Diefendorff.
While Intel and the PC industry stumble around in search of some need for the processing power they already have, Sony has been busy trying to figure out how to get more of it—lots more. The company has apparently succeeded: at the recent International Solid-State Circuits Conference (see MPR 4/19/99, p. 20), Sony Computer Entertainment (SCE) and Toshiba described a multimedia processor that will be the heart of the next-generation PlayStation, which—lacking an official name—we refer to as PlayStation 2000, or PSX2. Called the Emotion Engine (EE), the new chip upsets the traditional notion of a game processor.
... rate of two million units per month, making it the most successful single product (in units) Sony has ever built. Although SCE has cornered more than 60% of the $6 billion game-console market, it was beginning to feel the heat from Sega's Dreamcast (see MPR 6/1/98, p. 8), which has sold over a million units since its debut last November. With a 200-MHz Hitachi SH-4 and NEC's PowerVR graphics chip, Dreamcast delivers 3 to 10 times as many 3D polygons as the PlayStation's 34-MHz MIPS processor (see MPR 7/11/94, p. 9). To maintain king-of-the-mountain status, SCE had to do something spectacular. And it has: the PSX2 will deliver more than 10 times the polygon throughput of Dreamcast ...
- Parallel Computer Architecture III
Stefan Lang, Interdisciplinary Center for Scientific Computing (IWR), University of Heidelberg, INF 368, Room 532, D-69120 Heidelberg, phone: 06221/54-8264. Lecture slides (51 slides) for "Simulation on High-Performance Computers", WS 14/15. Topics: parallelism and granularity, graphics cards, I/O, and a detailed study of the HyperTransport protocol.
[Figure: parallelism and granularity levels. Level five: jobs or programs (coarse grained); level four: subprograms, modules, or classes; level three: procedures, functions, or methods (middle grained); level two: non-recursive loops or iterators; level one: instructions or statements (fine grained). Side labels mark the increase of parallelism and the increase of communication requirements across the levels.]
Graphics cards: GPU = Graphics Processing Unit; CUDA = Compute Unified Device Architecture, a toolkit by NVIDIA for direct GPU programming, i.e. programming a GPU without a graphics API. Compared to CPUs, GPGPU offers strongly increased computing performance and memory bandwidth, and GPUs are cheap and broadly established. Further slides cover computing performance of CPU vs. GPU, graphics-card hardware specifications, chip architecture of CPU vs. GPU, and graphics-card hardware design.
Graphics-card memory design: 8192 registers (32-bit), in total 32 KB per multiprocessor; 16 KB of fast shared memory per multiprocessor; large global memory (hundreds of MBs).
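The memory-design figures quoted above can be sanity-checked in a few lines; the 256-threads-per-block launch configuration in this sketch is an assumed value, while the register and shared-memory numbers come from the slides.

# Minimal sketch: 8192 32-bit registers per multiprocessor do total 32 KB,
# and an assumed 256-thread block leaves 32 registers per thread.
registers_per_mp = 8192
register_bytes   = 4            # 32-bit registers
shared_mem_kb    = 16           # fast shared memory per multiprocessor

register_file_kb = registers_per_mp * register_bytes / 1024
print(f"register file per multiprocessor: {register_file_kb:.0f} KB")   # 32 KB

threads_per_block = 256         # assumed launch configuration
print(f"registers per thread at {threads_per_block} threads/block: "
      f"{registers_per_mp // threads_per_block}")                        # 32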
- Commemorative Booklet for the Thirty-Fifth Asilomar Microcomputer Workshop, April 15-17, 2009
Programs from the 1975-2009 workshops. This file is available at www.amw.org.
AMW: 30th Workshop Prologue, by Ted Laliotis. The Asilomar Microcomputer Workshop (AMW) has played a very important role during its 30 years of existence. Perhaps that is why it continues to be well attended. The workshop was founded in 1975 as an IEEE technical workshop sponsored by the Western Area Committee of the IEEE Computer Society. The intentional lack of written proceedings and the exclusion of general press representatives was perhaps the most distinctive characteristic of AMW that made it so special and successful. This encouraged the scientists and engineers who were at the cutting edge of the technology, the movers and shakers who shaped Silicon Valley, the designers of the next-generation microprocessors, to discuss and debate freely the various issues facing microprocessors. In fact, many features, or the lack of them, were born during the discussions and debates at AMW. We often referred to AMW and its attendees as the bowels of Silicon Valley, even though attendees came from all over the country, and the world. Another characteristic that made AMW special was the "required" participation and contribution by all attendees. Every applicant to attend AMW had to convince the committee that he had something to contribute by speaking during one of the sessions or during the open-mike session. In the event that someone slipped through and was there only to listen, that person was not invited back the following year. The decades of the 70's and 80's were probably the defining decades for the amazing explosion of microcomputers.