A Low Cost, Transputer Based Visual Display Processor

G.J. Porter, B. Singh, S.K. Barton
Department of Electronic and Electrical Engineering, University of Bradford, Richmond Road, Bradford, West Yorkshire, UK

Transactions on Information and Communications Technologies vol 3, © 1993 WIT Press, www.witpress.com, ISSN 1743-3517

ABSTRACT

As major computer systems, and in particular parallel computing resources, become more accessible to the engineer, it is becoming necessary to increase the performance of the attached video display devices at least pro-rata with that of the computational elements. To this end a number of device manufacturers have developed, or are in the process of developing, new display controllers dedicated to improving the display environments of the supercomputers in use today by engineers. There is, however, a price to be paid for this development, both in monetary cost and in the design effort involved in integrating the new technology into existing systems. This paper presents a solution to these problems: firstly, by showing how to utilise available transputer technology to upgrade the display capability of existing systems, transferring some of the available processing power from the computational elements of the system to the display controller; and secondly, by utilising a low cost, off-the-shelf Transputer Module based video display unit. The new video display processor uses a three transputer pipeline formed from a Scan Converter Unit, a Span Encoder Unit and a Span Filler Unit, and can be further expanded by increasing the functionality and/or parallelism of each stage if required.

1. INTRODUCTION

The continued expansion in the application of computer graphics to all aspects of the working life of engineers has placed ever increasing demands on display technology. Higher resolution and faster refresh rates are constantly required by new application programs, graphical environments such as X Windows and MS Windows, and user written programs. It is not possible to continually upgrade the graphics cards and display devices used in computer systems, which leads to a display system that is less capable than that required by engineers. As part of the BRAD3D real-time image generation programme, a system was developed that would allow the use of existing low cost, relatively slow display cards in a high speed display unit, called a Visual Display Processor (VDP). This unit uses the existing display device as a simple back-end to the new system, performing all the necessary pre-processing in a two stage intelligent front-end and leaving the slower display card with the simple task of drawing the actual horizontal spans into the display memory.

1.1 The BRAD3D Project

The BRAD3D project began at the University of Bradford in 1983, with the development of the first real-time image generation system[1]. This was constructed from two interconnected Motorola MC68000 (later MC68020) processors and a custom VDP system. The resolution of the unit was 256 by 256 pixels and it could, in 1987, manipulate a 3-D image constructed from 400 vertices and 160 polygons in realistic real-time (40mS frame interval).
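For orientation, the quoted 1987 figures correspond to an update rate of 25 frames per second and a throughput of some 4000 polygons per second. The short C fragment below is an illustration added here, not part of the original paper; it simply performs that arithmetic.

    /* Illustrative arithmetic only: converts the 1987 BRAD3D figures quoted
       above into update and throughput rates. */
    #include <stdio.h>

    int main(void)
    {
        const double frame_interval = 0.040;   /* 40mS frame interval            */
        const int    polygons       = 160;     /* polygons manipulated per frame */
        const int    vertices       = 400;     /* vertices per frame             */

        double fps = 1.0 / frame_interval;     /* 25 frames per second */
        printf("update rate : %.0f frames/s\n", fps);
        printf("throughput  : %.0f polygons/s, %.0f vertices/s\n",
               polygons * fps, vertices * fps);
        return 0;
    }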
This was later improved by the addition of the XTAR[2] graphics co-processor and a four processor network, which increased the available resolution to 512 by 512 pixels and the processing rate to over 200 polygons in a 20mS frame interval, an eight fold increase over the previous system. Further improvements were made to the structure of BRAD3D and in 1991 a bit-slice processor was added to remove some of the computation from the Motorola MC68020 devices. This improved the through-put of the system but also highlighted its problems of extendability and flexibility. The system was seen as being inflexible and difficult to expand past its current size due to the fixed architecture of the dual VMEbus system employed. This instigated work on the current transputer based system, described in greater detail in another offering to this conference, of which the VDP described in this paper is the last stage in the processing pipeline.

1.2 The Inmos Transputer

The Inmos Transputer[3] is really a family of microprocessors, each sharing three basic facilities: a fast processor unit, on-chip memory and the ability to communicate bi-directionally with four other transputers. The transputer variant utilised in this project is the T800, which has both an integer (32 bit) and a floating point (64 bit) processor unit, giving it a sustained performance in excess of 1 Mflop at 20MHz. To support this high computation rate the communication structure, via the serial links[4], can support a data rate of up to 2.3Mbytes per second per link. This, together with the 4Kbytes of on-chip memory and the availability of a hardware scheduler with dual priority levels, makes the T800 a useful processor on which to develop parallel programs. The structure of the T800 is illustrated in figure 1.1.

[Figure 1.1: The Inmos T800 Transputer - floating point processor, integer processor, system services, 4Kbyte SRAM, timer, links 0 to 3, event pin and external memory interface on a 32 bit internal highway.]

1.3 The NT1000 Transputer System

The Niche NT1000[5] is a multi-user transputer facility hosted by a SUN workstation. It is divided into four sites, each site being further divided into eight slots, each slot being able to accommodate a size one Transputer Module (Tram). An individual user has direct access to a single site, with up to eight processors if size one Trams are used. Alternatively, a user may utilise two or more sites, giving them access to a maximum of thirty two processors. Sites may not be split between users. Communication between processors within the same site is handled by a single IMSC004 link switch, as shown in figure 1.2. Each slot has two links, numbers 0 and 3, which can be configured to communicate with other similar slots within the same site. Alternatively, by using the central link switch, a limited number of connections can be established between slots on different sites. Thus, most interprocessor communication paths can be established in this system.

[Figure 1.2: Niche NT1000 Transputer System - four sites (site 0 to site 3), each with eight slots (slot 0 to slot 7), and a VMEbus interface.]
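The 2.3 Mbytes per second link rate quoted above also bounds how much display data can move between slots within one frame interval. The C fragment below is a rough budget calculation added here for illustration; it assumes an 8-bit, 640 by 480 display (the resolution discussed later) and a 20mS frame interval, and suggests why compact span descriptions, rather than raw pixel data, are the natural currency between pipeline stages.

    /* Rough per-frame link budget, for illustration only (one byte per pixel
       assumed; figures taken from the text). */
    #include <stdio.h>

    int main(void)
    {
        const double link_rate      = 2.3e6;        /* bytes/s on one T800 link */
        const double frame_interval = 0.020;        /* 20mS frame interval      */
        const long   raw_frame      = 640L * 480L;  /* 8-bit 640 x 480 image    */

        double budget = link_rate * frame_interval; /* ~46 Kbytes per frame     */
        printf("one link carries about %.0f bytes per frame\n", budget);
        printf("a raw frame is %ld bytes, i.e. %.1f frame intervals on one link\n",
               raw_frame, raw_frame / budget);
        return 0;
    }

In other words, shipping raw pixels over a single link would take several frame times, whereas a polygon reduced to a few span records fits comfortably within the budget.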
A single cycle or path exists between the processors in each site. This connects link 2 of each slot to link 1 of the next slot, with the slot that connects to the host machine using link 1 to establish this communication, and the slot that connects to the off-board interface using link 2.

1.4 Inmos B419 Graphics TRAM

The IMSB419[6] is a graphics Tram, based on a single T800 transputer and a G300 Graphics Controller[7]. The structure of the unit is shown in figure 1.3. The transputer has access to two areas of memory, one to hold program and data and the second to act as a frame buffer for the image to be displayed. Both of these arrays of memory are 2Mbytes in size and are accessed on word boundaries in the transputer's memory map. The G300 has access to the frame buffer memory via its pixel port, the frame buffer being constructed from video RAMs, which have a high speed serial port that allows access to the memory at rates sufficient to drive a monitor at a horizontal resolution of 1024 pixels.

[Figure 1.3: Inmos B419 Graphics Tram - T800 with links 0 to 3, program/data memory, frame buffer memory, G300 controller and video sync.]

Initial investigations of the B419 showed that clearing the screen, at a screen size of 640 by 480 pixels, took nearly 37mS when using the supplied CGI graphics libraries[8], and that this was reduced to approximately 20mS when coded directly in assembly language utilising the fast block MOVE[9] instructions. Upon examining the timings of the dynamic RAMs used for the video memory, it could be seen that the 20mS performance figure was extremely close to the maximum obtainable due to the restricted memory bandwidth. The results of these initial tests showed clearly that it was not possible to use the standard draw-to-frame-buffer approach of clearing the buffer to a background attribute and then writing each polygon to the memory in the order in which it was scan converted. Normally, when this type of painter's algorithm is employed, the time to clear the video memory to a background attribute represents a small percentage of the available frame time, nominally 10-20%, whilst in this case it was almost 60% of the frame time. Consequently, the three processor system described in this paper was developed, in an attempt to overcome the shortcomings in the design of the B419 graphics Tram.

1.5 Overview

The VDP developed to overcome the problems defined in section 1.4 above is divided into three functional processes, each mapped onto a single transputer.
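To make the span-filling idea concrete, the sketch below shows the kind of inner loop the final pipeline stage implies: spans arriving from the front-end are written directly into the frame buffer, so each visible pixel is touched once per frame and no separate clear-to-background pass is needed. This is a minimal illustration under assumed conventions (8-bit pixels, a linear 640 by 480 frame buffer, a hypothetical Span record); it is not the authors' implementation.

    /* Minimal span-filling sketch (illustrative only, not the authors' code). */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define SCREEN_WIDTH  640
    #define SCREEN_HEIGHT 480

    /* Hypothetical span record: one horizontal run on a single scanline,
       already clipped and resolved by the front-end stages. */
    typedef struct {
        int     y;        /* scanline                         */
        int     x_left;   /* first pixel of the run           */
        int     x_right;  /* last pixel of the run, inclusive */
        uint8_t colour;   /* attribute to write               */
    } Span;

    /* Write a batch of spans straight into an 8-bit linear frame buffer.
       Only pixels covered by spans are touched, so the costly full-screen
       clear of the painter's algorithm is avoided. */
    void fill_spans(uint8_t *frame_buffer, const Span *spans, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            const Span *s = &spans[i];
            uint8_t *row  = frame_buffer + (size_t)s->y * SCREEN_WIDTH;
            memset(row + s->x_left, s->colour,
                   (size_t)(s->x_right - s->x_left + 1));
        }
    }

The intended saving is that each pixel is written at most once per frame, instead of once for the clear and again for every polygon covering it; with the B419's 20mS minimum clear time, that difference is decisive.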