General-Purpose Systolic Arrays

Kurtis T. Johnson and A.R. Hurson, Pennsylvania State University
Behrooz Shirazi, University of Texas, Arlington

Systolic arrays effectively exploit massive parallelism in computationally intensive applications. With advances in VLSI, WSI, and FPGA technologies, they have progressed from fixed-function to general-purpose architectures.

When Sun Microsystems introduced its first workstation, the company could not have imagined how quickly workstations would revolutionize computing. The idea of a community of engineers, scientists, or researchers time-sharing on a single mainframe computer could hardly have become ancient any more quickly. The almost instant wide acceptance of workstations and desktop computers indicates that they were quickly recognized as giving the best and most flexible performance for the dollar.

Desktop computers proliferated for three reasons. First, very large scale integration (VLSI) and wafer scale integration (WSI), despite some problems, increased the gate density of chips while dramatically lowering their production cost. Moreover, increased gate density permits a more complicated processor, which in turn promotes parallelism. Second, desktop computers distribute processing power to the user in an easily customized open architecture. Real-time applications that require intensive I/O and computation need not consume all the resources of a supercomputer. Also, desktop computers support high-definition screens with color and motion far exceeding those available with any multiple-user, shared-resource mainframe. Third, economical, high-bandwidth networks allow desktop computers to share data, thus retaining the most appealing aspect of centralized computing, resource sharing. Moreover, networks allow the computers to share data with dissimilar computing machines. That is perhaps the most important reason for the acceptance of desktop computers, since all the performance in the world is worth little if the machine is isolated.

Today's workstations have redefined the way the computing community distributes processing resources, and tomorrow's machines will continue this trend with higher bandwidth networks and higher computational performance. One way to obtain higher computational performance is to use special parallel coprocessors to perform functions such as motion and color support of high-definition screens. Future computationally intensive applications suited for desktop computing machines include real-time text, speech, and image processing. These applications require massive parallelism.

Many computational tasks are by their very nature sequential; for other tasks the degree of parallelism varies. Therefore, a massively parallel computational architecture must maintain sufficient application flexibility and computational efficiency. It must be

• reconfigurable to exploit application-dependent parallelisms,
• high-level-language programmable for task control and flexibility,
• scalable for easy extension to many applications, and
• capable of supporting single-instruction stream, multiple-data stream (SIMD) organizations for vector operations and multiple-instruction stream, multiple-data stream (MIMD) organizations to exploit nonhomogeneous parallelism requirements.

Systolic arrays are ideally qualified for computationally intensive applications. Whether functioning as a dedicated fixed-function graphics processor or a more complicated and flexible coprocessor shared across a network, a systolic array effectively exploits massive parallelism. Falling into an area between vector computers and massively parallel computers, systolic arrays typically combine intensive local communication and computation with decentralized parallelism in a compact package. They capitalize on regular, modular, rhythmic, synchronous, concurrent processes that require intensive, repetitive computation. While systolic arrays originally were used for fixed or special-purpose architectures, the systolic concept has been extended to general-purpose SIMD and MIMD architectures.

Why systolic arrays?

Ever since Kung proposed the systolic model, its elegant solutions to demanding problems and its potential performance have attracted great attention. In physiology, the term systolic describes the contraction (systole) of the heart, which regularly sends blood to all cells of the body through the arteries, veins, and capillaries. Analogously, systolic computer processes perform operations in a rhythmic, incremental, cellular, and repetitive manner. The systolic computational rate is restricted by the array's I/O operations, much as the heart controls blood flow to the cells since it is the source and destination for all blood.

Although there is no widely accepted standard definition of systolic arrays and systolic cells, the following description serves as a working definition in this article (also see "Systolic array summary" below). Systolic arrays have balanced, uniform, gridlike architectures (Figure 1) in which each line indicates a communication path and each intersection represents a cell or a systolic element. However, systolic arrays are more than processor arrays that execute systolic algorithms. Systolic arrays consist of elements that take one of the following forms:

• a special-purpose cell with hardwired functions,
• a vector-computer-like cell with an instruction decoding unit and a processing unit, or
• a processor complete with a control unit and a processing unit.

Figure 1. General systolic organization.

In all cases, the systolic elements or cells are customized for intensive local communications and decentralized parallelism. Because an array consists of cells of only one or, at most, a few kinds, it has regular and simple characteristics. The array usually is extensible with minimal difficulty.

Three factors have contributed to the systolic array's evolution into a leading approach for handling computationally intensive applications: technology advances, concurrency processing, and demanding scientific applications.

Technology advances. Advances in VLSI/WSI technology complement the systolic array's qualifications in the following ways:

• Smaller and faster gates allow a higher rate of on-chip communication because data has a shorter distance to travel.
• Higher gate densities permit more complicated cells with higher individual and group performance. Granularity increases as word length increases, and concurrency increases with more complicated cells.
• Economical design and fabrication processes produce less expensive systolic chips, even in small quantities.
• Better design tools allow arrays to be ...

Demanding scientific applications. The technology growth of the last three decades has produced computing environments that make it feasible to attack demanding scientific applications on a larger scale. Large-matrix multiplication, feature extraction, cluster analysis, and radar signal processing are only a few examples. As recent history shows, when many computer users work on a wide variety of applications, they develop new applications requiring increased computational performance. Examples of these innovative applications include interactive language recognition, relational database operations, text recognition, and virtual reality. These applications require massive repetitive and ...

Cell granularity. The level of cell granularity directly affects the array's throughput and flexibility and determines the set of algorithms that it can efficiently execute. Each cell's basic operation can range from a logical or bitwise operation, to a word-level multiplication or addition, to a complete program. Granularity is subject to technology capabilities and limitations as well as design goals. For example, integration-substrate families have different performance and density characteristics. Packaging also introduces I/O pin restrictions.

Extensibility. Because systolic arrays are built of cellular ...

Systolic array summary

Systolic array: A gridlike structure of special processing elements that processes data much like an n-dimensional pipeline. Unlike a pipeline, however, the input data as well as partial results flow through the array. In addition, data can flow in a systolic organization at multiple speeds in multiple directions. Systolic arrays usually have a very high rate of I/O and are well suited for intensive parallel operations.

Applications: Matrix arithmetic, signal processing, image processing, language recognition, relational database operations, data structure manipulation, and character string manipulation.

Special-purpose systolic array: An array of hardwired systolic processing elements tailored for a specific application. Typically, many tens or hundreds of cells fit on a single chip.

General-purpose systolic array: An array of systolic processing elements that can be adapted to a variety of applications via programming or reconfiguration.

Programmable systolic array: An array of programmable systolic elements that operates either in SIMD or MIMD fashion. Either the array's interconnect or each processing unit is programmable, and a program controls dataflow through the elements.

Reconfigurable systolic array: An array of systolic elements that can be programmed at the lowest level. FPGA (field-programmable gate array) technology allows the array to emulate hardwired systolic elements at a very low level for each unique application.
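
The summary above describes a systolic array as something like an n-dimensional pipeline through which input data and partial results both flow, with every cell operating in lockstep. The short Python sketch below is not from the article; it is a minimal software simulation, under the assumption of a square array of word-level multiply-accumulate cells computing a matrix product. Values of A flow rightward, values of B flow downward, the two input streams are skewed in time so that matching operands meet in each cell, and every cell fires once per clock tick. The function name systolic_matmul and the register names are illustrative choices only.

    # Minimal sketch (not from the article): a software simulation of an
    # n x n systolic array computing C = A x B.  A values flow rightward,
    # B values flow downward, and every cell multiply-accumulates once per
    # clock tick, so partial results stay in place while operands move.

    def systolic_matmul(A, B):
        n = len(A)                            # assumes square n x n inputs
        C = [[0] * n for _ in range(n)]       # one accumulator per cell
        a_reg = [[0] * n for _ in range(n)]   # A operand held in each cell
        b_reg = [[0] * n for _ in range(n)]   # B operand held in each cell

        for t in range(3 * n - 2):            # enough ticks for all operands
            # Shift phase: pass operands to the right/down neighbors.
            for i in range(n):
                for j in range(n - 1, 0, -1):
                    a_reg[i][j] = a_reg[i][j - 1]
            for j in range(n):
                for i in range(n - 1, 0, -1):
                    b_reg[i][j] = b_reg[i - 1][j]
            # Feed phase: time-skewed inputs enter at the array boundary.
            for i in range(n):
                k = t - i
                a_reg[i][0] = A[i][k] if 0 <= k < n else 0
            for j in range(n):
                k = t - j
                b_reg[0][j] = B[k][j] if 0 <= k < n else 0
            # Compute phase: all cells multiply-accumulate in lockstep.
            for i in range(n):
                for j in range(n):
                    C[i][j] += a_reg[i][j] * b_reg[i][j]
        return C

    print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]

Running this on the 2 x 2 example prints [[19, 22], [43, 50]], the ordinary matrix product. A hardware array differs mainly in that the cells really do operate simultaneously and data enters only at the array boundary, so an n x n array finishes in roughly 3n clock ticks rather than the n^3 sequential multiply-accumulate steps a single processor would need.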