United States Patent [19] — Patent Number: 5,410,727 — Jaffe et al.


US005410727A

[54] INPUT/OUTPUT SYSTEM FOR A MASSIVELY PARALLEL, SINGLE INSTRUCTION, MULTIPLE DATA (SIMD) COMPUTER PROVIDING FOR THE SIMULTANEOUS TRANSFER OF DATA BETWEEN A HOST COMPUTER INPUT/OUTPUT SYSTEM AND ALL SIMD MEMORY DEVICES

[11] Patent Number: 5,410,727
[45] Date of Patent: Apr. 25, 1995

[75] Inventors: Robert S. Jaffe, Shenorock, N.Y.; Hungwen Li, Monte Sereno, Calif.; Margaret M. L. Kienzle, Somers, N.Y.; Ming-Cheng Sheng, Kaohsiung, Taiwan, Prov. of China

[73] Assignee: International Business Machines Corporation, Armonk, N.Y.

[21] Appl. No.: 157,232

[22] Filed: Nov. 22, 1993

Related U.S. Application Data
[63] Continuation of Ser. No. 426,140, Oct. 24, 1989, abandoned.

[51] Int. Cl.6: G06F 5/06; G06F 13/12; G06F 15/16; G06F 7/00
[52] U.S. Cl.: 395/800; 395/250; 395/275; 395/425; 364/DIG. 1; 364/231.9
[58] Field of Search: 395/800, 375, 275, 425, 200

[56] References Cited

U.S. PATENT DOCUMENTS
3,287,703  11/1966  Slotnick ............... 395/800
3,936,806   2/1976  Batcher ................ 340/172.5
3,979,728   9/1976  Reddaway ............... 340/172.5
4,065,808  12/1977  Schomberg et al. ....... 395/325
4,101,960   7/1978  Stokes et al. .......... 395/800
4,380,046   4/1983  Frosch et al. .......... 395/800
4,481,580  11/1984  Martin et al. .......... 395/325
4,484,262  11/1984  Sullivan et al. ........ 395/425
4,514,807   4/1985  Nogi ................... 364/200
4,523,273   6/1985  Adams, III et al. ...... 395/800
4,546,433  10/1985  Tucker ................. 364/200
4,601,055   7/1986  Kent ................... 382/49
4,665,556   5/1987  Fukushima et al. ....... 382/41
4,707,781  11/1987  Sullivan et al. ........ 395/425
4,709,327  11/1987  Hillis et al. .......... 395/375
4,727,474   2/1988  Batcher ................ 395/800
4,773,038   9/1988  Hillis et al. .......... 395/500
4,783,738  11/1988  Li et al. .............. 395/800
4,901,224   2/1990  Ewert .................. 364/200
5,081,575   1/1992  Hiller et al. .......... 395/325
5,136,717   8/1992  Morley et al. .......... 395/800
5,148,547   9/1992  Kahle et al. ........... 395/800

FOREIGN PATENT DOCUMENTS
2160685  12/1985  United Kingdom

OTHER PUBLICATIONS
D. Parkinson et al., "The AMT DAP 500," Compcon 88 — Thirty-Third IEEE Computer Society International Conference, San Francisco, Calif., Spring 1988, IEEE, New York, pp. 196-199.
Wiackless, "Massively Parallel Computer for Digital Signal and Image Processing," IEEE, May 1989.
Roberts, "Recent Developments in Parallel Processing," IEEE.

Primary Examiner: Alyssa H. Bowler
Assistant Examiner: L. Donoghue
Attorney, Agent, or Firm: Scully, Scott, Murphy & Presser

[57] ABSTRACT

A two-dimensional input/output system for a massively parallel SIMD computer system providing an interface for the two-way transfer of data between a host computer and the SIMD computer. A plurality of buffers, equal in number to and distributed with the individual processing elements of the SIMD computer, are used to provide a temporary storage area which allows data in different formats to be mapped into a format suitable for transfer to the host computer or for transfer to the SIMD processing elements. The temporary storage is controlled in such a way as to transfer entire blocks of data in a single SIMD system clock cycle, thereby achieving an input/output data rate of N bits/cycle for a SIMD computer consisting of N processors. The system is capable of handling irregular as well as regular data structures. The system also emphasizes a distributed approach in having the input/output system divided into N pieces and distributed to each processor to reduce the wiring complexity while maintaining the I/O rate.

45 Claims, 6 Drawing Sheets

[Front-page drawing: local buffer, address decoder, ADDR/DATA/TYPE lines, SIMD memory — figure not reproducible in text.]
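The abstract's claimed I/O rate of N bits per cycle can be contrasted with a conventional mesh machine, where data enters through one boundary row of a √N × √N array at only √N items per cycle. The toy model below is a sketch under simplified assumptions (uniform cycles, one data item per processor); the function names are invented for illustration and are not part of the patented design.

```python
import math

# Toy comparison of SIMD input/output schemes (illustrative only).
# N processors arranged as a sqrt(N) x sqrt(N) mesh.

def cycles_boundary_shift(n_processors):
    # Conventional scheme: sqrt(N) items enter per cycle through one
    # boundary row/column, so delivering one item to every processor
    # takes N / sqrt(N) = sqrt(N) machine cycles.
    side = math.isqrt(n_processors)
    return n_processors // side

def cycles_distributed_buffers(n_processors):
    # Distributed-buffer scheme: every processor has its own buffer,
    # and an entire N-bit block is transferred in one clock cycle.
    return 1

n = 16384  # a 128 x 128 mesh
print(cycles_boundary_shift(n))       # 128 cycles
print(cycles_distributed_buffers(n))  # 1 cycle
```

For a 128 × 128 array the boundary-shifting scheme needs 128 cycles to what the distributed scheme does in one, which is the factor-of-√N gap the patent addresses.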
[U.S. Patent, Apr. 25, 1995 — Sheets 1 through 6 of 6: drawing-sheet text not recoverable from the scan.]

INPUT/OUTPUT SYSTEM FOR A MASSIVELY PARALLEL, SINGLE INSTRUCTION, MULTIPLE DATA (SIMD) COMPUTER PROVIDING FOR THE SIMULTANEOUS TRANSFER OF DATA BETWEEN A HOST COMPUTER INPUT/OUTPUT SYSTEM AND ALL SIMD MEMORY DEVICES

This is a continuation of application Ser. No. 426,140 filed on Oct. 24, 1989, now abandoned.

BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates to an input/output system for SIMD parallel computers, and more particularly, to a distributed input/output system using a temporary storage buffer, individual for each processing element of the SIMD computer, capable of providing a two-dimensional data transfer scheme that substantially increases the I/O rate of the SIMD system.

2. Discussion of the Prior Art

Scientists and engineers from all disciplines have become dependent upon computers to further their work, and with this dependency they have grown to expect the performance of these computers to increase by an order of magnitude approximately every five years. This trend of order-of-magnitude increases in computer performance is slowing; in fact, the supercomputers presently available may already be within an order of magnitude of their technological limit. Heretofore, the limit was approximately 3 gigaflops, which corresponds to approximately 3 billion floating point instructions per second, and which is a function of the length of time it takes electrical signals to propagate through various wires and interconnections at approximately one half the speed of light. The drawback of the prior art system is that many of the problems facing today's scientists and engineers can only be solved utilizing computers with performance capabilities far exceeding the 3-gigaflop limit.

Recent advances in supercomputer performance have been achieved by dividing applications among many processors working in parallel. Theoretically, parallel processing computers should provide performance in the teraflop range. While these computers provide increased capacity and speed, they also present a new set of problems, namely, programming the new computers, handling the input/output operations, and manipulating the data. The programming difficulties stem from the fact that no matter how well a program is written, it is extremely hard to achieve 100 percent utilization of multiple processors. The problem of handling input/output (I/O) operations and data manipulation arises because of the sheer volume of data associated … subsystem 30 typically comprises a staging memory that is responsible for transferring data between the SIMD computer 10 and the host 20.

In fine-grained, massively parallel SIMD systems, one single instruction after another is broadcast simultaneously to the processor array, with each instruction being applied to different pieces of data.

Traditionally, fine-grained SIMD parallel systems devoted their application emphasis to image-oriented computing, which resulted in the input/output system being designed only to handle regularly structured two-dimensional data such as image or matrix data. The input/output rate of a SIMD computer system was typically low due to the fact that for an N-processor SIMD system, arranged as a √N × √N mesh, only √N items of data are input or output to or from the system per machine cycle. Most fine-grained SIMD parallel systems are connected by mesh networks, and their input/output is done by shifting data between a host and one boundary row/column of the SIMD system. This type of data transfer is considered one-dimensional. In addition, data must be pre-arranged by the host such that a particular datum can be assigned to a desired processor. The low input/output rate and the restricted capability of handling only regular data structures effectively confine SIMD computers to a narrow application domain.

A second disadvantage of the mesh-oriented row/column shifting scheme used in the prior art SIMD input/output systems is the difficulty in programming. Since the input/output function is overlapped with the current task execution, the programmer must interleave the instructions for computing with the instructions for input/output. This situation may lead to very unreadable code, as well as force the programming to stay at the assembly language level.

A third aspect of the prior art input/output subsystems presently employed by SIMD computers is the handling of the corner turning function. The corner turning function is a phenomenon due to the different arrangement of data at the host and SIMD systems. For example, N 32-bit words are arranged in the host as N consecutive words, each being 32 bits wide. However, in transfer, these data words are distributed among 32 planes of SIMD memory, with each plane containing N bits, each of which is associated with one processor. This situation arises due to the fact that in the SIMD system, all processors need to access the same memory location in the same machine cycle, and the plane organization supports such memory accessing. The corner turning of regular data structures such as image or matrix is supported by mesh-oriented row/column shifting.
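The corner-turning rearrangement described above can be sketched in a few lines: bit b of host word i lands in plane b at position i, so plane b is an N-bit vector with one bit per processor. This is an illustrative model only (function names are invented; the patent performs this in hardware, not software).

```python
# Illustrative "corner turning": N 32-bit host words are redistributed
# into 32 bit-planes of SIMD memory, where plane b holds bit b of every
# word (one bit per processing element).

def corner_turn(words, width=32):
    """Turn N host words into `width` bit-planes of N bits each."""
    return [[(w >> b) & 1 for w in words] for b in range(width)]

def corner_turn_back(planes):
    """Inverse: reassemble host words from their bit-planes."""
    n = len(planes[0])
    return [sum(planes[b][i] << b for b in range(len(planes)))
            for i in range(n)]

words = [0x00000001, 0xDEADBEEF, 0xFFFFFFFF, 0x12345678]
planes = corner_turn(words)
assert len(planes) == 32 and len(planes[0]) == len(words)
assert corner_turn_back(planes) == words
```

The round trip is lossless: the transform is just a bit-level transpose of an N × 32 matrix, which is why all N processors can then read "their" bit of a given word at the same memory address in the same cycle.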