FLEXBUS: A High-Performance System-on-Chip Communication Architecture with a Dynamically Configurable Topology


34.4 FLEXBUS: A High-Performance System-on-Chip Communication Architecture with a Dynamically Configurable Topology*

Krishna Sekar (Dept. of ECE, UC San Diego, CA 92093), Kanishka Lahiri (NEC Laboratories America, Princeton, NJ 08540), Anand Raghunathan (NEC Laboratories America, Princeton, NJ 08540), Sujit Dey (Dept. of ECE, UC San Diego, CA 92093)

ABSTRACT
In this paper, we describe FLEXBUS, a flexible, high-performance on-chip communication architecture featuring a dynamically configurable topology. FLEXBUS is designed to detect run-time variations in communication traffic characteristics, and efficiently adapt the topology of the communication architecture, both at the system level, through dynamic bridge by-pass, as well as at the component level, using component re-mapping. We describe the FLEXBUS architecture in detail and present techniques for its run-time configuration based on the characteristics of the on-chip communication traffic. The techniques underlying FLEXBUS can be used in the context of a variety of on-chip communication architectures. In particular, we demonstrate its application to AMBA AHB, a popular commercial on-chip bus. Detailed experiments conducted on the FLEXBUS architecture using a commercial design flow, and its application to an IEEE 802.11 MAC processor design, demonstrate that it can provide significant performance gains as compared to conventional architectures (up to 31.5% in our experiments), with negligible hardware overhead.

Categories and Subject Descriptors: B.4.3 [Input/Output and Data Communications]: Interconnections (Subsystems)
General Terms: Algorithms, Design, Performance
Keywords: Communication architectures, On-chip bus

1. INTRODUCTION
The on-chip communication architecture has emerged as a critical determinant of overall performance in complex System-on-Chip (SoC) designs, which makes it crucial to customize it to the characteristics of the traffic generated by the application. Since application traffic can exhibit significant diversity, configurable communication architectures that can be easily adapted to an application's requirements are desirable.

As shown in this paper, complex SoC designs can benefit from communication architectures that can be dynamically configured in order to adapt to changing traffic characteristics. Most state-of-the-art communication architectures provide limited customization opportunities through a static (design time) configuration of architectural parameters, and as such, lack the flexibility to provide high performance in cases where the traffic profiles exhibit significant dynamic variation.

In this paper, we describe FLEXBUS, a flexible communication architecture that detects run-time variations in system-wide communication traffic characteristics, and efficiently adapts the logical connectivity of the communication architecture and the components connected to it. This is achieved using two novel techniques, bridge by-pass and component re-mapping, which address configurability at the system and component levels, respectively. These techniques provide opportunities for dynamically controlling the spatial allocation of communication architecture resources among different SoC components, a capability which, if properly exploited, can yield substantial performance gains. We also describe techniques for choosing optimized FLEXBUS configurations under time-varying traffic profiles. We have conducted detailed experiments on FLEXBUS using a commercial design flow to analyze its area and performance under a wide variety of system-level traffic profiles, and applied it to an IEEE 802.11 MAC processor design. The results demonstrate that FLEXBUS provides up to 31.5% performance gains compared to conventional architectures, with negligible hardware overhead.

* This work was supported by the UC Discovery Grant (grant# com02-10123), the Center for Wireless Communications (UC San Diego), and NEC Laboratories America.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. DAC 2005, June 13-17, 2005, Anaheim, California, USA. Copyright 2005 ACM 1-59593-058-2/05/0006 ...$5.00.

1.1 Related Work
The increasing importance of on-chip communication has resulted in numerous advances in communication architecture topology and protocol design [1]-[5]. However, these architectures are based on fixed topologies, and therefore, are not fully optimized for traffic with time-varying characteristics. The use of sophisticated communication protocols [3], [6] and techniques for customizing these protocols, both statically and dynamically [7], [8], [9], have been proposed. Protocol and topology customization are complementary, and hence, may be combined to yield large performance gains. A number of automatic approaches have been proposed to statically optimize the communication architecture topology [10], [11]. While many of these techniques aim at exploiting application characteristics, they do not adequately address dynamic variations in communication traffic profiles.

2. BACKGROUND
Communication architecture topologies can range from a single shared bus, to which all the system components are connected, to a network of bus segments interconnected by bridges. Component mapping refers to the association between components and bus segments. Components can be either masters (e.g., CPUs, DSPs), which can initiate transactions (reads/writes), or slaves (e.g., memories, peripherals), which only respond to transactions initiated by a master. Communication protocols specify conventions for the data transfer (e.g., arbitration policies, burst transfer modes). Bridges are specialized components that enable transactions between masters and slaves located on different bus segments. "Cross-bridge" transactions execute as follows: the transaction request from the master, once granted by the first bus, is registered by the bridge's slave interface. The bridge forwards the transaction to its master interface, which then requests access to the second bus. On being granted, the bridge executes the transaction with the destination slave and returns the response to its slave interface, which finally returns the response to the original master.

3. MOTIVATIONAL EXAMPLE: IEEE 802.11 MAC PROCESSOR
In this section, we use an IEEE 802.11 MAC processor design as an example to illustrate the concepts behind the FLEXBUS architecture. The functional specification of the IEEE 802.11 MAC processor consists of a set of communicating tasks, shown in Figure 1(a) (details of the 802.11 MAC protocol are available in [12]). For outgoing frames, the LLC task receives frames from the Logical Link Control Layer. The Wired Equivalent Privacy (WEP) task and the Integrity Checksum Vector (ICV) task work in conjunction to encrypt and compute a checksum of the frame, respectively. The HDR task generates the MAC header. The Frame Check Sequence (FCS) task computes a CRC-32 checksum over the encrypted frame and header. MAC CTRL implements the

[Figure 1: IEEE 802.11 MAC processor: (a) functional specification, mapping to (b) a single shared bus, (c) a multiple bus architecture]

3.2 Dynamic Topology Configuration
We next consider the execution of the IEEE 802.11 MAC processor under two variants of the FLEXBUS architecture.

Example 3: We first assume that the keys are statically configured. Under FLEXBUS, the architecture operates in a multiple bus mode during intervals that exhibit localized communications, and hence enables concurrent processing of the WEP and FCS tasks. In intervals that require low latency communications between components located on different bus segments, a dynamic bridge by-pass mechanism temporarily transforms the architecture into a single shared bus topology. The measured data rate under this architecture was 248 Mbps, a 23% improvement over the best statically configured architecture.

The bridge by-pass mechanism provides a technique to make coarse-grained (system level) changes to the communication architecture topology. However, at times, the ability to make more fine-grained (component level) changes to the topology is also important, as illustrated next.

Example 4: When the TKIP task is enabled, the FLEXBUS architecture with dynamic bridge by-pass achieves a data rate of 208 Mbps, which is 18% better than the best conventional architecture. We next consider the application of a dynamic component re-mapping capability. Using this technique, while the overall architecture remains in the multiple bus configuration, the mapping of the slave Frame buf2 is dynamically switched between the two buses. It is mapped to AHB_2 as long as the FCS and WEP components are processing frame data,
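The cross-bridge transaction sequence of Section 2, and the effect of the bridge by-pass described in Section 3.2, can be sketched as a toy cycle-count model. All latency constants below (arbitration, bridge registration, transfer) are illustrative assumptions, not figures from the paper:

```python
# Toy model: a cross-bridge transaction pays for two arbitrations plus the
# bridge's registration/forwarding; with bridge by-pass (flat topology) or a
# same-segment slave, only one arbitration is needed.

def transaction_cycles(master_bus, slave_bus, bypass_enabled,
                       arb_cycles=2, xfer_cycles=1, bridge_cycles=2):
    """Estimate cycles for one read/write under assumed per-step latencies."""
    if master_bus == slave_bus or bypass_enabled:
        # Single-bus (or by-passed) case: one arbitration, then the transfer.
        return arb_cycles + xfer_cycles
    # Cross-bridge: granted on the first bus, registered by the bridge's slave
    # interface, forwarded to its master interface, re-arbitrated on bus two.
    return arb_cycles + bridge_cycles + arb_cycles + xfer_cycles

local = transaction_cycles("AHB_1", "AHB_1", bypass_enabled=False)  # 3 cycles
cross = transaction_cycles("AHB_1", "AHB_2", bypass_enabled=False)  # 7 cycles
flat  = transaction_cycles("AHB_1", "AHB_2", bypass_enabled=True)   # 3 cycles
```

Under these assumptions the model shows the trade-off the paper exploits: the multiple-bus mode wins when traffic is localized (concurrent transfers), while by-pass removes the double-arbitration penalty when traffic crosses segments.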
Recommended publications
  • System Buses EE2222 Computer Interfacing and Microprocessors
    System Buses. EE2222 Computer Interfacing and Microprocessors (2020). Partially based on Computer Organization and Architecture by William Stallings and Computer Electronics by Thomas Blum.
    Connecting: all the units must be connected, with a different type of connection for each type of unit (CPU, memory, input/output).
    CPU connection: reads instructions and data; writes out data (after processing); sends control signals to other units; receives (and acts on) interrupts.
    Memory connection: receives and sends data; receives addresses (of locations); receives control signals (read, write, timing).
    Input/output connection: similar to memory from the computer's viewpoint. Output: receives data from the computer and sends data to the peripheral. Input: receives data from the peripheral and sends data to the computer. Also receives control signals from the computer, sends control signals to peripherals (e.g. spin disk), receives addresses from the computer (e.g. a port number to identify the peripheral), and sends interrupt signals (control).
    What is a bus? A communication pathway connecting two or more devices. Usually broadcast (all components see the signal). Often grouped: a number of channels in one bus, e.g. a 32-bit data bus is 32 separate single-bit channels. Power lines may not be shown.
    Data bus: carries data; remember that there is no difference between "data" and "instruction" at this level. Width (8, 16, 32, 64 bit) is a key determinant of performance.
    Address bus: identifies the source or destination of data, e.g. the CPU needs to read an instruction (data) from a given location in memory. Bus width determines the maximum memory capacity of the system.
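The point that address bus width determines maximum memory capacity is easy to make concrete (a minimal sketch; the example widths are just common cases):

```python
def max_memory_bytes(address_bus_width):
    """A w-bit address bus can distinguish 2**w distinct byte addresses."""
    return 2 ** address_bus_width

assert max_memory_bytes(16) == 64 * 1024    # 64 KiB, typical of 8-bit micros
assert max_memory_bytes(32) == 4 * 1024**3  # 4 GiB for a 32-bit address bus
```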
  • Input/Output
    Lectures 24: Input/Output
    I. External devices
    A. External devices are not connected directly to the system bus because they have a wide range of control logics, as well as data transfer speeds and formats.
    B. Virtually all external devices have buffers, control signals, status signals, and data bits.
    C. Those that deal with other forms of energy have transducers that convert non-electrical data to electrical data (e.g. a key press to ASCII in a keyboard), or electrical data to non-electrical data (e.g. bytes to light in a monitor).
    D. In the past, intra-system (less than 1 meter) connections were usually parallel, and inter-system connections were serial. Now almost all are serial.
    1. To convert from parallel to serial, use a shift register.
    2. USB (universal serial bus) is a common standard for serial transmission, with USB 3.0 transferring at 5 Gb/s.
    II. I/O modules (the south bridge and north bridge on a PC) on the motherboard provide the logic, buffers, error detection, and ports to communicate with the external devices on one side, and a system-type bus on the other. For external device interfaces, the modules have data, status, and control lines; for the system bus they have data, address, and control lines. The south bridge handles slower I/O devices and is connected to the north bridge rather than to the system bus directly.
    III. Programmed I/O
    A. Overview: with programmed I/O, the device waits for the processor to query it.
    B. Four types of I/O commands: 1) control (e.g.
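The parallel-to-serial conversion via shift register mentioned in point D.1 can be sketched in a few lines (a behavioral model of a shift register, not any particular device; LSB-first bit order is an assumption):

```python
def serialize(byte, lsb_first=True):
    """Parallel-in, serial-out shift register: emit one bit per clock tick."""
    bits = []
    for i in range(8):
        shift = i if lsb_first else 7 - i
        bits.append((byte >> shift) & 1)
    return bits

def deserialize(bits, lsb_first=True):
    """Serial-in, parallel-out: reassemble the byte on the receiving side."""
    byte = 0
    for i, bit in enumerate(bits):
        shift = i if lsb_first else 7 - i
        byte |= (bit & 1) << shift
    return byte

assert deserialize(serialize(0x41)) == 0x41  # round trip recovers the byte
```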
  • PDP-11 Bus Handbook (1979)
    The material in this document is for informational purposes only and is subject to change without notice. Digital Equipment Corporation assumes no liability or responsibility for any errors which appear in this document or for any use made as a result thereof. By publication of this document, no licenses or other rights are granted by Digital Equipment Corporation by implication, estoppel or otherwise, under any patent, trademark or copyright. Copyright © 1979, Digital Equipment Corporation. The following are trademarks of Digital Equipment Corporation: DIGITAL, PDP, UNIBUS, DEC, DECUS, MASSBUS, DECtape, DDT, FLIP CHIP, DECdataway.
    Contents (Part 1, UNIBUS Specification): Introduction (Scope; Content); UNIBUS Description (Architecture; Unibus Transmission Medium; Bus Terminator; Bus Segment; Bus Repeater; Bus Master; Bus Slave; Bus Arbitrator; Bus Request; Bus Grant; Processor; Interrupt Fielding Processor; ...)
  • Upgrading and Repairing Pcs, 21St Edition Editor-In-Chief Greg Wiegand Copyright © 2013 by Pearson Education, Inc
    Upgrading and Repairing PCs, 21st Edition, by Scott Mueller. Copyright © 2013 by Pearson Education, Inc. 800 East 96th Street, Indianapolis, Indiana 46240.
    Contents at a Glance: Introduction; 1. Development of the PC; 2. PC Components, Features, and System Design; 3. Processor Types and Specifications; 4. Motherboards and Buses; 5. BIOS; 6. Memory; 7. The ATA/IDE Interface; 8. Magnetic Storage Principles; 9. Hard Disk Storage; 10. Flash and Removable Storage; 11. Optical Storage; 12. Video Hardware; 13. Audio Hardware; 14. External I/O Interfaces; 15. Input Devices; 16. Internet Connectivity; 17. Local Area Networking; 18. Power Supplies; 19. Building or Upgrading Systems; 20. PC Diagnostics, Testing, and Maintenance; Index.
    Editor-in-Chief: Greg Wiegand. Acquisitions Editor: Rick Kughen. Development Editor: Todd Brakke. Managing Editor: Sandra Schroeder. Project Editor: Mandie Frank. Copy Editor: Sheri Cain.
    All rights reserved. No part of this book shall be reproduced, stored in a retrieval system, or transmitted by any means, electronic, mechanical, photocopying, recording, or otherwise, without written permission from the publisher. No patent liability is assumed with respect to the use of the information contained herein. Although every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions. Nor is any liability assumed for damages resulting from the use of the information contained herein.
    ISBN-13: 978-0-7897-5000-6. ISBN-10: 0-7897-5000-7. Library of Congress Cataloging-in-Publication Data is on file.
  • BUS and CACHE MEMORY ORGANIZATIONS for MULTIPROCESSORS By
    BUS AND CACHE MEMORY ORGANIZATIONS FOR MULTIPROCESSORS, by Donald Charles Winsor. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Electrical Engineering) in The University of Michigan, 1989. Doctoral Committee: Associate Professor Trevor N. Mudge, Chairman; Professor Daniel E. Atkins; Professor John P. Hayes; Professor James O. Wilkes.
    ABSTRACT: The single shared bus multiprocessor has been the most commercially successful multiprocessor system design up to this time, largely because it permits the implementation of efficient hardware mechanisms to enforce cache consistency. Electrical loading problems and restricted bandwidth of the shared bus have been the most limiting factors in these systems. This dissertation presents designs for logical buses constructed from a hierarchy of physical buses that will allow snooping cache protocols to be used without the electrical loading problems that result from attaching all processors to a single bus. A new bus bandwidth model is developed that considers the effects of electrical loading of the bus as a function of the number of processors, allowing optimal bus configurations to be determined. Trace driven simulations show that the performance estimates obtained from this bus model agree closely with the performance that can be expected when running a realistic multiprogramming workload in which each processor runs an independent task. The model is also used with a parallel program workload to investigate its accuracy when the processors do not operate independently. This is found to produce large errors in the mean service time estimate, but still gives reasonably accurate estimates for the bus utilization.
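To illustrate the kind of trade-off such a bandwidth model captures, here is a deliberately simplified stand-in (not Winsor's actual model): assume electrical loading degrades achievable bus capacity linearly with the number of attached processors, while offered traffic grows with it. All constants are invented for illustration:

```python
# Toy shared-bus throughput model: capacity shrinks with loading, offered
# traffic grows with processor count, so deliverable throughput peaks at
# some intermediate number of processors.

def bus_throughput(n_processors, base_bw=100.0, loading_penalty=0.03,
                   demand_per_cpu=8.0):
    """Deliverable throughput (arbitrary units) on one shared bus."""
    capacity = base_bw * (1.0 - loading_penalty * n_processors)  # loading loss
    offered = demand_per_cpu * n_processors                      # total demand
    return max(0.0, min(offered, capacity))

# With these constants the optimum lands where demand meets degraded capacity.
best_n = max(range(1, 33), key=bus_throughput)
```

The point of a real model like the dissertation's is exactly this: once loading effects are quantified, the optimal bus configuration can be solved for rather than guessed.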
  • The System Bus
    ELEC 379: Design of Digital and Microcomputer Systems, 1998/99 Winter Session, Term 1. The System Bus. This lecture describes system buses. These are buses used to interface the CPU with memory and peripherals on separate PC cards. The ISA and PCI buses are used as examples.
    Introduction: To increase their flexibility, most general-purpose microcomputers include a system bus that allows printed circuit boards (PCBs) containing memory or I/O devices to be connected to the CPU. This allows microcomputer systems to be customized for different applications. The use of a standard bus also allows manufacturers to produce peripherals that will work properly with other manufacturers' computer systems. The system bus consists of a number of parallel conductors on a backplane or motherboard. There are a number of connectors or "slots" into which other PCBs containing memory and I/O devices can be plugged in. In most cases the system bus is very similar to the ... characteristics. We will look briefly at two examples of system buses. The ISA (Industrial Standard Architecture) bus is commonly used in IBM-PC compatibles and is one of the most widely-used system busses. The PCI (Peripheral Component Interconnect) bus is a flexible high-performance peripheral bus that can efficiently interconnect peripherals and processors of widely different speeds.
    Mechanical Characteristics: Low-cost consumer-grade buses use card-edge connectors to minimize the cost of the peripheral. The plug-in card has contact pads along the edges of the PCB. The motherboard has connectors that contact these pads.
  • DMA, System Buses and Peripheral Buses
    EECE 379: Design of Digital and Microcomputer Systems, 2000/2001 Winter Session, Term 1. DMA, System Buses and Peripheral Buses.
    Direct Memory Access is a method of transferring data between peripherals and memory without using the CPU. After this lecture you should be able to: identify the advantages and disadvantages of using DMA and programmed I/O, and select the most appropriate method for a particular application.
    System buses are the buses used to interface the CPU with memory and peripherals on separate PC cards. The ISA and PCI buses are used as examples. You should be able to state the most important specifications of both buses.
    Serial interfaces are typically used to connect computer systems and low-speed external peripherals such as modems and printers. Serial interfaces reduce the cost of interconnecting devices because the bits in a byte are time-multiplexed onto a single wire and connector. After this lecture you should be able to describe the operation and format of data and handshaking signals on an RS-232 interface.
    Parallel ports are also used to interface the CPU to I/O devices. Two common parallel port standards, the "Centronics" parallel printer port and the SCSI interface, are described. After this lecture you should be able to design simple input and output I/O ports using address decoders, registers and tri-state buffers.
    Programmed I/O: Programmed I/O (PIO) refers to using input and output (or move) instructions to transfer data between memory and the registers on a peripheral interface. Approximately how many bus cycles will be required? Direct Memory Access (DMA)
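The closing objective above, designing simple I/O ports with address decoders, registers and tri-state buffers, can be mimicked behaviorally in software. This is a hypothetical sketch: the I/O window, register offsets and values are invented for illustration:

```python
# Behavioral model of a memory-mapped I/O read path: an address decoder
# selects either a peripheral register (the "port" drives the bus through
# its tri-state buffer) or main memory.

IO_BASE, IO_SIZE = 0xFF00, 0x10      # assumed 16-byte I/O window (invented)
io_registers = {0x0: 0x5A}           # offset 0: an invented status register
memory = bytearray(0x10000)          # 64 KiB of modeled memory

def bus_read(addr):
    """Return the byte that would be driven onto the data bus for a read."""
    if IO_BASE <= addr < IO_BASE + IO_SIZE:          # address decoder "hit"
        return io_registers.get(addr - IO_BASE, 0xFF)  # port drives the bus
    return memory[addr]                               # memory drives the bus

memory[0x1234] = 0x42
assert bus_read(0x1234) == 0x42   # normal memory read
assert bus_read(0xFF00) == 0x5A   # decoded I/O register read
```

In hardware the `if` is a comparator on the high address bits, and the two return paths are the enable inputs of two tri-state buffers, only one of which may drive the shared data bus at a time.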
  • Application Note Ap-28
    Application Note AP-28: Intel MULTIBUS Interfacing. © Intel Corporation, 1977. Price $1.00.
    Related Intel publications: INTELLEC Microcomputer Development System Hardware Reference Manual, 98-132; System 80/10 Microcomputer Hardware Reference Manual, 98-316; 8080 Microcomputer Systems User's Manual, 98-153; SBC 501 Direct Memory Access Controller Hardware Reference Manual, 98-294.
    The material in this Application Note is for informational purposes only and is subject to change without notice. Intel Corporation has made an effort to verify that the material in this document is correct. However, Intel Corporation does not assume any responsibility for errors that may appear in this document. The following are trademarks of Intel Corporation and may be used only to describe Intel products: ICE-30, MCS, ICE-80, MEGACHASSIS, INSITE, MICROAMP, INTEL, MULTIBUS, INTELLEC, PROMPT, LIBRARY MANAGER, UPI.
    Contents: Introduction; Intel MULTIBUS; MULTIBUS Signal Descriptions; Operating Characteristics; MULTIBUS Interface Circuits; Address Decoding; Bus Drivers; Control Signal Logic; General Purpose Slave Interface; Functional/Programming Characteristics; Theory of Operation; User Selectable Options; Prototyping Applications; Summary; Appendix A: MULTIBUS Pin Assignment; Appendix B: MULTIBUS DC Requirements; Appendix C: GPSI Interface Schematic and Wire List; Appendix D: Mechanical Specifications.
    Intel Corporation assumes no responsibility for the use of any circuitry other than circuitry embodied in an Intel product. No other circuit patent licenses are implied.
    INTRODUCTION: A significant measure of the power and flexibility of the Intel OEM Computer Product Line can be ... share resources on the same bus. Actual transfers via the bus, however, proceed asynchronously with respect to the bus clock.
  • PCI Local Bus Specification
    PCI Local Bus Specification, Revision 2.2, December 18, 1998.
    Revision history: 1.0 (6/22/92): original issue. 2.0 (4/30/93): incorporated connector and expansion board specification. 2.1 (6/1/95): incorporated clarifications and added 66 MHz chapter. 2.2 (12/18/98): incorporated ECNs and improved readability.
    The PCI Special Interest Group disclaims all warranties and liability for the use of this document and the information contained herein, and assumes no responsibility for any errors that may appear in this document, nor does the PCI Special Interest Group make a commitment to update the information contained herein. Contact the PCI Special Interest Group office to obtain the latest revision of the specification. Questions regarding the PCI specification or membership in the PCI Special Interest Group may be forwarded to the PCI Special Interest Group, Hillsboro, Oregon (http://www.pcisig.com).
    DISCLAIMER: This PCI Local Bus Specification is provided "as is" with no warranties whatsoever, including any warranty of merchantability, noninfringement, fitness for any particular purpose, or any warranty otherwise arising out of any proposal, specification or sample. The PCI SIG disclaims all liability for infringement of proprietary rights relating to use of information in this specification. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted herein.
  • The Motherboard ‐ Chapter #5
    The Motherboard, Chapter #5. Amy Hissom. Key Terms:
    Advanced Transfer Cache (ATC) — a type of L2 cache contained within the Pentium processor housing that is embedded on the same core processor die as the CPU itself.
    Audio/modem riser (AMR) — a specification for a small slot on a motherboard to accommodate an audio or modem riser card. A controller on the motherboard contains some of the logic for the audio or modem functionality.
    Back side bus — the bus between the CPU and the L2 cache inside the CPU housing.
    Bus speed — the speed, or frequency, at which the data on the motherboard is moving.
    Communication and networking riser (CNR) — a specification for a small expansion slot on a motherboard that accommodates a small audio, modem, or network riser card.
    CISC (complex instruction set computing) — an earlier type of CPU instruction set.
    Cooler — a combination cooling fan and heat sink mounted on the top or side of a processor to keep it cool.
    Discrete L2 cache — a type of L2 cache contained within the Pentium processor housing, but on a different die, with a cache bus between the processor and the cache.
    Dual-voltage CPU — a CPU that requires two different voltages, one for internal processing and the other for I/O processing.
    Execution Trace Cache — a type of Level 1 cache used by some CPUs to hold decoded operations waiting to be executed.
    Expansion bus — a bus that does not run in sync with the system clock.
    EPIC (explicitly parallel instruction computing) — the CPU architecture used by the Intel Itanium chip that bundles programming instructions with instructions on how to use multiprocessing abilities to do two instructions in parallel.
  • What Is a DMA Controller?
    Direct Memory Access (DMA). The execution of a computer program requires the synchronous working of more than one component of a computer: for example, processors provide the necessary control information and addresses, and buses transfer information and data to and from memory and I/O devices. An important aspect of the system is the way it handles the transfer of information among processor, memory and I/O devices. Usually, the processor controls the whole process of transferring data, from initiating the transfer to the storage of data at the destination. This adds load on the processor, which spends much of the time idle waiting on transfers, decreasing the efficiency of the system. To speed up the transfer of data between I/O devices and memory, a DMA controller acts as station master and transfers data with minimal intervention of the processor.
    What is a DMA Controller? The term DMA stands for direct memory access. The hardware device used for direct memory access is called the DMA controller. The DMA controller is a control unit, part of the I/O device's interface circuit, which can transfer blocks of data between I/O devices and main memory with minimal intervention from the processor. The DMA controller transfers data in three modes, which include the following. a) Burst mode: in this mode, the DMA controller hands the buses back to the CPU only after completion of the whole data transfer; meanwhile, if the CPU requires the bus, it has to stay idle and wait for the transfer to finish. b) Cycle stealing mode: in this mode, the DMA controller gives control of the buses back to the CPU after the transfer of every byte.
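The difference between burst mode and cycle stealing is easiest to see as cycle accounting. This is an illustrative model with an assumed one bus cycle per byte, not timing from the text:

```python
# Model the CPU-visible bus stalls for a DMA block transfer: burst mode
# produces one long lockout; cycle stealing produces many one-byte stalls.

def cpu_wait_cycles(n_bytes, mode, cycles_per_byte=1):
    """Return the list of individual bus-lockout stretches the CPU may see."""
    if mode == "burst":
        # The DMA controller holds the bus for the entire block.
        return [n_bytes * cycles_per_byte]
    if mode == "cycle_stealing":
        # The bus is released after every byte, so each stall is one byte long.
        return [cycles_per_byte] * n_bytes
    raise ValueError(f"unknown mode: {mode}")

# Same total bus time either way; the difference is how it is distributed.
assert sum(cpu_wait_cycles(64, "burst")) == sum(cpu_wait_cycles(64, "cycle_stealing"))
assert max(cpu_wait_cycles(64, "burst")) == 64          # one long stall
assert max(cpu_wait_cycles(64, "cycle_stealing")) == 1  # short, frequent stalls
```

This is why burst mode maximizes DMA throughput while cycle stealing minimizes the worst-case latency the CPU experiences.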
  • PDP-11/24 - Computer History Wiki
    PDP-11/24 (Computer History Wiki: http://gunkies.org/wiki/PDP-11/24)
    The PDP-11/24 was the last low-end UNIBUS PDP-11 system. It used the KDF11-U CPU. Like the earlier PDP-11/44, the -11/24 supported up to 4 Mbytes of main memory, using the Extended UNIBUS (EUB) between the CPU and memory; all devices were attached to a semi-separate (see below) UNIBUS. An optional UNIBUS Map board provided access to all of memory for UNIBUS DMA devices; without it, the UNIBUS address space was statically mapped across to the low 248 Kbytes of EUB main memory, using a cross-connection path on the CPU board.
    Contents: 1. UNIBUS Map; 2. System bus structure (2.1 Implementation); 3. Packaging; 4. Memory configuration limits.
    UNIBUS Map: The optional UNIBUS Map board, the KT24, provided a path between the UNIBUS and the EUB; the KT24 was also implemented as a single hex board, the M7134. A set of 31 mapping register pairs in the KT24 mapped 8 Kbyte blocks of UNIBUS address space to any location within the 4 Mbyte main memory address space. If no KT24 was present, the CPU detected its absence, and turned on a set of drivers which gated the address from the UNIBUS through to the EUB. (Pin FE1 of the CPU slot, "UB to MA VIA UBMAP", is connected to a pull-up on the CPU board, and also runs to the slot where the KT24 is plugged in, where it is grounded if a KT24 is plugged in; that signal is used to generate "UB to MA", which is the cross-connection enable signal.)
    System bus structure: As mentioned, the -11/24 used an EUB for the bus between the main memory and the CPU, and the UNIBUS for the bus between the CPU and devices.
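The KT24 translation described above (31 register pairs mapping 8 Kbyte UNIBUS blocks anywhere into the 4 Mbyte EUB space, versus the static gate-through to low memory) can be sketched as follows. The register representation is illustrative, not the actual KT24 programming model:

```python
# Sketch of KT24-style UNIBUS-to-EUB address mapping: split the 18-bit
# UNIBUS address into an 8 KB block number and an offset, then relocate
# the block through a mapping register.

BLOCK = 8 * 1024                                        # 8 KB granularity
map_regs = {page: page * BLOCK for page in range(31)}   # start as identity

def unibus_to_physical(addr, map_present=True):
    """Translate an 18-bit UNIBUS DMA address to a main-memory address."""
    assert 0 <= addr < 256 * 1024                       # 18-bit address space
    page, offset = addr >> 13, addr & (BLOCK - 1)
    if not map_present:
        # No KT24: addresses gate straight through to low EUB memory.
        return addr
    return map_regs[page] + offset

map_regs[2] = 0x300000   # point UNIBUS block 2 high into the 4 MB space
assert unibus_to_physical(2 * BLOCK + 5) == 0x300000 + 5
assert unibus_to_physical(5, map_present=False) == 5
```

Only 31 of the 32 possible 8 KB blocks are mapped because the top block of the UNIBUS address space is the I/O page rather than memory.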