Lecture 1: Introduction to Hardware Architecture – Hierarchy of Computer Architecture

Total Pages: 16

File Type: pdf, Size: 1020 KB

Lecture 1: Introduction to Hardware Architecture – Hierarchy of Computer Architecture (11/5/2018)

Hardware Architecture: The First Computer

• ENIAC (Electronic Numerical Integrator And Calculator)
• Developed at the University of Pennsylvania
• Funded by the US Department of Defense
• Became operational at the end of the Second World War
• Built between 1943 and 1946
• Made public in 1946
• The designers were J. P. Eckert and J. Mauchly

ENIAC

• General-purpose machine used for the calculation of artillery firing tables
• A "U" shape roughly 30 meters long, 2 m high and 1 m deep
• 30 tons
• 18,000 vacuum tubes (valves)
• 20 ten-digit registers (each register over half a meter long)
• 200 microseconds for an addition
• 180 kW of power consumption: when it was put into operation for the first time, the entire west district of Philadelphia went into a blackout
• Manual programming through wires and switches
• Data introduced through punched cards
• Tedious and error-prone to program

The Von Neumann Architecture

• John Von Neumann was part of the ENIAC group
• He wrote a memo based on ENIAC, proposing a computer called EDVAC (Electronic Discrete Variable Automatic Computer)
• The memo went down in history, even though some of the ideas were others'
• Key feature: the stored program (held in central memory)
• All computers are based on this model, now universally known as the Von Neumann Machine

The Von Neumann Architecture

This model is based on the following fundamental observations:
• the computer must perform arithmetic operations very frequently, so it is reasonable for it to have one or more devices specialized for such operations
• the logical control of the device, i.e. the correct sequencing of those operations, can be carried out more efficiently by a flexible central controller, which distinguishes between the order of execution of the instructions (this depends on the problem to be solved, so the instructions must be stored) and the way each single instruction is executed (this depends on the machine, so it is managed by the device itself)
• the sequence of instructions and the data on which it operates must be stored in memory
• data and results must be exchangeable with the outside world through input/output (I/O) units
• all the devices mentioned must be able to talk to each other through appropriate connections
Let's look briefly at the resulting model.

The Von Neumann Architecture

[Diagram: the CPU (Central Processing Unit), which executes the instructions, connected to the working memory by a bus that provides communication.]

Hardware: the Von Neumann Architecture

The CPU (Central Processing Unit) is the brain of the computer, as it coordinates and manages all the various hardware devices for program execution. It is composed of:

The control unit (CU), which deals with the interpretation and execution of instructions. It is the component of a computer's CPU that directs the operation of the processor. It tells the computer's memory, arithmetic/logic unit and input and output devices how to respond to the instructions that have been sent to the processor.

An Arithmetic Logic Unit (ALU), a combinational digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers. An ALU is a fundamental building block of many types of computing circuits, including the CPU of computers, FPUs, and graphics processing units (GPUs). A single CPU, FPU or GPU may contain multiple ALUs.
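To make the ALU concrete, here is a minimal sketch in C (mine, not the lecture's) of a toy ALU as a pure function: an opcode selects among a few arithmetic and bitwise operations on two integer operands, much as a real ALU's control lines select its internal circuit. The opcode names are invented for illustration; a real ALU would also raise the status flags (carry, zero, sign, overflow) that end up in the PSW described below.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy ALU: the opcode plays the role of the ALU's control lines. */
typedef enum { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR, ALU_XOR } alu_op;

uint32_t alu(alu_op op, uint32_t a, uint32_t b) {
    switch (op) {
    case ALU_ADD: return a + b;  /* arithmetic operations */
    case ALU_SUB: return a - b;
    case ALU_AND: return a & b;  /* bitwise operations */
    case ALU_OR:  return a | b;
    case ALU_XOR: return a ^ b;
    }
    return 0;
}

int main(void) {
    printf("3 + 4 = %u\n", alu(ALU_ADD, 3, 4));
    printf("0xF0 AND 0x3C = 0x%X\n", alu(ALU_AND, 0xF0, 0x3C));
    return 0;
}
```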
Several registers. A processor register is a quickly accessible location available to a computer's CPU. Registers usually consist of a small amount of fast storage, although some registers have specific hardware functions, and may be read-only or write-only. An example of a register is the Program Counter.

A clock that marks the time intervals in which the devices inside the CPU operate. It determines the speed of the CPU, expressed as a frequency, i.e. the number of intervals marked per unit of time (e.g. 512 MHz, 1 GHz, 2 GHz, ...). Some examples of CPUs: Intel® Core™ i7, AMD® Phenom II.

CPU state

[Diagram of the CPU state: the status register (PSW) holds the flags C, Z, S and O; the Program Counter (PC) holds the address of the next instruction; the Instruction Register (IR) holds the instruction in execution; the Interrupt Register (INTR), the operand registers A and B feeding the ALU (logic and arithmetic operations), and other registers complete the picture. The Control Unit (CU), synchronized by the clock, controls the fetch-decode-execute cycle; the Address Register (MAR) holds the memory address to access and the Data Register (DTR) holds the word to read from or write into memory.]

Hardware: the Von Neumann Architecture

Random-access memory (RAM) is a form of computer data storage that stores the data and machine code currently being used. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of the data inside the memory. It stores the instructions (in machine language) that must be performed and the data on which they operate. It has the characteristic of being:
• volatile: its contents are lost when the computer is turned off
• fast (on the order of nanoseconds, i.e. 10⁻⁹ seconds)
• expensive
• of medium-small size, from a few hundred megabytes (e.g. 512 MB in mobile devices) to a few gigabytes (desktop and server machines)

RAM

[Diagram of RAM: a volatile array of cells, each holding an h-bit word of data or instructions. The k-bit cell address arrives through the Address Register (MAR), and the h-bit word to read (load) or write (store) passes through the Data Register (DTR). With k address bits the addressing space is 2^k cells, numbered 0 to 2^k - 1.]
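As a worked illustration of the addressing scheme in the diagram above (a sketch of mine, not the lecture's): with k address bits the memory has 2^k cells of h bits each. The toy model below uses k = 16 and h = 8 and routes every access through MAR and DTR variables, mirroring the two registers in the figure; all names are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define K 16                 /* address bits */
#define CELLS (1u << K)      /* 2^k = 65,536 addressable cells */

static uint8_t ram[CELLS];   /* h = 8 bits per cell */
static uint16_t mar;         /* Memory Address Register */
static uint8_t  dtr;         /* Data Register */

/* store: the address goes into the MAR, the datum into the DTR,
 * then the addressed cell is written */
void mem_store(uint16_t addr, uint8_t word) {
    mar = addr;
    dtr = word;
    ram[mar] = dtr;
}

/* load: the address goes into the MAR, and memory fills the DTR
 * with the word read from the addressed cell */
uint8_t mem_load(uint16_t addr) {
    mar = addr;
    dtr = ram[mar];
    return dtr;
}

int main(void) {
    mem_store(0x1234, 42);
    printf("cells = %u, ram[0x1234] = %u\n", CELLS, mem_load(0x1234));
    return 0;
}
```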
Hardware: the Von Neumann Architecture

The input/output controller (alternatively referred to as an input/output interface, IOC, or PIOC for Peripheral Input/Output Controller) is a device that interfaces between an input or output device and the computer or hardware device. The input/output controller of a computer is commonly located on the motherboard.

I/O Device: Mass Memory

• Although it is a key component, it is not part of the Von Neumann machine in the strict sense
• It consists of hard disks, tapes, CDs and DVD ROMs, ...
• Compared to the central memory it has the characteristic of being
  • non-volatile
  • slow (for hard disks, on the order of milliseconds, i.e. 10⁻³ seconds)
  • economical
  • large (for hard drives, hundreds of gigabytes up to a few terabytes)
• In contrast to RAM, the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement.

I/O Devices

• They allow the communication of the computer with the outside world, in particular the reading of input data and the return of the results of the computations as output.
• These include terminals (keyboard and screen), mice, printers, scanners, ...

System Bus

• Allows communication between the various components
• It is subdivided into
  • Data bus: for data transmission
  • Address bus: for the transmission of central-memory addresses
  • Control bus: for the transmission of commands to the various units and of control information
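Putting the pieces together, the sketch below (mine, not the lecture's) simulates the fetch-decode-execute loop on a toy Von Neumann machine: program and data share a single memory, the PC addresses the next instruction, the IR holds the one being executed, and the control unit is the switch that decodes it. The instruction format and opcodes are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy Von Neumann machine: one shared memory for code and data. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

uint8_t  mem[256];  /* unified memory: instructions and data */
uint8_t  acc;       /* accumulator (operand register) */
uint8_t  pc;        /* Program Counter */
uint16_t ir;        /* Instruction Register: opcode byte + address byte */

void run(void) {
    for (;;) {
        /* fetch: read the instruction addressed by the PC, advance the PC */
        ir = (uint16_t)((mem[pc] << 8) | mem[pc + 1]);
        pc += 2;
        uint8_t op = ir >> 8, addr = ir & 0xFF;
        /* decode + execute: the control unit's job */
        switch (op) {
        case OP_LOAD:  acc = mem[addr];  break;
        case OP_ADD:   acc += mem[addr]; break;  /* the ALU at work */
        case OP_STORE: mem[addr] = acc;  break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* program: acc = mem[100]; acc += mem[101]; mem[102] = acc; halt */
    uint8_t prog[] = { OP_LOAD, 100, OP_ADD, 101, OP_STORE, 102, OP_HALT, 0 };
    for (int i = 0; i < 8; i++) mem[i] = prog[i];
    mem[100] = 7; mem[101] = 35;  /* data lives in the same memory as code */
    run();
    printf("mem[102] = %u\n", mem[102]);  /* prints 42 */
    return 0;
}
```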
Recommended publications
  • Computer Architectures
    Computer Architectures: Central Processing Unit (CPU). Pavel Píša, Michal Štepanovský, Miroslav Šnorek. The lecture is based on the A0B36APO lecture. Some parts are inspired by the book Patterson, D., Hennessy, J.: Computer Organization and Design, The HW/SW Interface. Elsevier, ISBN: 978-0-12-370606-5, and are used with the authors' permission. Czech Technical University in Prague, Faculty of Electrical Engineering. English version partially supported by the European Social Fund Prague & EU: We invest in your future.
    Computer based on von Neumann's concept
    ● Control unit and ALU – together the processor/microprocessor
    ● Memory – von Neumann architecture uses a common memory, whereas Harvard architecture uses separate program and data memories
    ● Input and Output – the input/output subsystem
    The control unit is responsible for control of the operation processing and sequencing. It consists of:
    ● registers – they hold intermediate and programmer-visible state
    ● control logic circuits, which represent the core of the control unit (CU)
    The most important registers of the control unit
    ● PC (Program Counter) holds the address of the current or next instruction to be processed
    ● IR (Instruction Register) holds the machine instruction read from memory
    ● Other usually present registers
      ● General purpose registers (GPRs), which may be divided into address and data or (partially) specialized registers
      ● SP (Stack Pointer) – points to the top of the stack (the stack is usually used to store local variables and subroutine return addresses)
      ● PSW (Program Status Word)
      ● IM (Interrupt Mask)
      ● Optional floating-point (FPRs) and vector/multimedia registers
    The main instruction cycle of the CPU
    1. Initial setup/reset – set initial PC value, PSW, etc.
  • Microprocessor Architecture
    EECE416 Microcomputer Fundamentals: Microprocessor Architecture. Dr. Charles Kim, Howard University.
    Computer Architecture: a computer system is a CPU (with PC, registers, SR) plus memory; the ALU (Arithmetic Logic Unit) is built around the binary full adder; the microprocessor communicates over a bus.
    Architecture by CPU+MEM organization:
    Princeton (or von Neumann) architecture – memory contains both instructions and data.
    Harvard architecture – separate data memory and instruction memory; higher performance, better for DSP, higher memory bandwidth.
    Princeton architecture:
    1. Step (A): The address of the instruction to be executed next is applied.
    2. Step (B): The controller "decodes" the instruction.
    3. Step (C): Following completion of the instruction, the controller provides to the memory unit the address at which the data result generated by the operation will be stored.
    Internal memory ("registers"): external memory access is very slow; internal registers provide quicker retrieval and storage.
    Architecture by instructions and their execution:
    CISC (Complex Instruction Set Computer) – a variety of instructions for complex tasks; instructions of varying length. The architecture of prior to the mid-1980s (IBM 390, Motorola 680x0, Intel 80x86): a basic fetch-execute sequence supports a large number of complex instructions; complex decoding procedures; a complex control unit; one instruction achieves a complex task.
    RISC (Reduced Instruction Set Computer) – fewer and simpler instructions; high-performance microprocessors; pipelined instruction execution (several instructions are executed in parallel).
  • What Do We Mean by Architecture?
    Embedded programming: Comparing the performance and development workflows for architectures. Embedded programming week, FABLAB BRIGHTON 2018.
    What do we mean by architecture? The architectures of microprocessors and microcontrollers are classified based on the way memory is allocated (memory architecture). There are two main ways of doing this:
    Von Neumann architecture (also known as Princeton). Von Neumann uses a single unified cache (i.e. the same memory) for both the code (instructions) and the data itself. Under pure von Neumann architecture the CPU can be either reading an instruction or reading/writing data from/to the memory. Both cannot occur at the same time, since the instructions and data use the same bus system.
    Harvard architecture. Harvard architecture uses different memory allocations for the code (instructions) and the data, allowing it to read instructions and perform data memory access simultaneously. The best performance is achieved when both instructions and data are supplied by their own caches, with no need to access external memory at all.
    How does this relate to microcontrollers/microprocessors? We found this page to be a good introduction to the topic of microcontrollers and microprocessors, the architectures they use and the difference between some of the common types. First though, it's worth looking at the difference between a microprocessor and a microcontroller. Microprocessors (e.g. ARM) generally consist of just the Central Processing Unit (CPU), which performs all the instructions in a computer program, including arithmetic, logic, control and input/output operations. Microcontrollers (e.g. AVR, PIC or 8051) contain one or more CPUs with RAM, ROM and programmable input/output peripherals.
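    As a rough sketch of that distinction (mine, not the article's): a von Neumann machine fetches instructions and data over the same path, while a Harvard machine gives each its own memory and can do both in the same cycle. The C structs below are purely illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* von Neumann: one memory, one bus; an instruction fetch and a data
 * access cannot happen in the same cycle. */
typedef struct {
    uint8_t mem[1024];  /* code and data share this space */
} von_neumann_t;

/* Harvard: separate memories (and buses) for code and data, so an
 * instruction fetch and a data access can proceed simultaneously. */
typedef struct {
    uint8_t imem[512];  /* instruction memory (often flash/ROM) */
    uint8_t dmem[512];  /* data memory (RAM) */
} harvard_t;

/* One simulated Harvard cycle: both reads are allowed at once
 * because they touch different memories. */
void harvard_cycle(const harvard_t *m, uint16_t pc, uint16_t addr,
                   uint8_t *insn, uint8_t *data) {
    *insn = m->imem[pc];    /* fetch instruction */
    *data = m->dmem[addr];  /* read operand in the same cycle */
}

int main(void) {
    harvard_t m = { .imem = { 0x42 }, .dmem = { [7] = 99 } };
    uint8_t insn, data;
    harvard_cycle(&m, 0, 7, &insn, &data);
    printf("insn = 0x%X, data = %u\n", insn, data);
    return 0;
}
```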
  • The Von Neumann Computer Model
    The von Neumann Computer Model (CIS-77)
    1. The von Neumann Computer Model 2. Components of the Von Neumann Model 3. Communication Between Memory and Processing Unit 4. CPU data-path 5. Memory Operations 6. Understanding the MAR and the MDR 7. Understanding the MAR and the MDR, Cont. 8. ALU, the Processing Unit 9. ALU and the Word Length 10. Control Unit 11. Control Unit, Cont. 12. Input/Output 13. Input/Output Ports 14. Input/Output Address Space 15. Console Input/Output in Protected Memory Mode 16. Instruction Processing 17. Instruction Components 18. Why Learn Intel x86 ISA? 19. Design of the x86 CPU Instruction Set 20. CPU Instruction Set 21. History of IBM PC 22. Early x86 Processor Family 23. 8086 and 8088 CPU 24. 80186 CPU 25. 80286 CPU 26. 80386 CPU 27. 80386 CPU, Cont. 28. 80486 CPU 29. Pentium (Intel 80586) 30. Pentium Pro 31. Pentium II 32. Itanium processor
    1. The von Neumann Computer Model: Von Neumann computer systems contain three main building blocks: the central processing unit (CPU), memory, and input/output devices (I/O). These three components are connected together using the system bus; the block diagram shows the major relationships between them. The most prominent items within the CPU are the registers: they can be manipulated directly by a computer program.
  • Hardware Architecture
    Hardware Architecture: Components
    Computing infrastructure components: servers, clients, LAN & WLAN, Internet connectivity, computation, software, storage, backup, security, data, network management. Integration is the key!
    Computer Model: Von Neumann Architecture
    Input: keyboard, mouse, scanner, punch cards
    Processing: CPU executes the computer program
    Output: monitor, printer, fax machine
    Storage: hard drive, optical media, diskettes, magnetic tape
    (Von Neumann architecture – Wiki article; 15 min YouTube video)
    Computer components: CPU, memory, hard disk, motherboard, CD/DVD drives, adaptors, power supply, display, keyboard, mouse, network interface, I/O ports.
    CPU – Central Processing Unit (microprocessor), consists of three parts:
    Control Unit
    • Executes programs/instructions: the machine language
    • Moves data from one memory location to another
    • Communicates between other parts of a PC
    Arithmetic Logic Unit
    • Arithmetic operations: add, subtract, multiply, divide
    • Logic operations: and, or, xor
    • Floating point operations: real number manipulation
    Registers
    (See How the CPU Works In One Lesson, 20 min YouTube video)
    CPU speed is influenced by several factors:
    Chip manufacturing technology: nm (2002: 130 nm, 2004: 90 nm, 2006: 65 nm, 2008: 45 nm, 2010: 32 nm; latest is 22 nm)
    Clock speed: gigahertz (typical: 2–3 GHz, maximum 5.5 GHz)
    Front side bus: MHz (typical: 1333 MHz, 1666 MHz)
    Word size: 32-bit or 64-bit word sizes
    Cache: Level 1 (64 KB per core) and Level 2 (256 KB per core) caches on die; now Level 3 (2 MB to 8 MB, shared) cache also on die
    Instruction set size: x86 (CISC), RISC
    Microarchitecture: CPU internal architecture (Ivy Bridge, Haswell)
    Single core/multi core
    Multi threading, hyper threading vs.
  • The ABC of Computing
    BIOGRAPHY: The ABC of computing
    [Photo: The 1940s Atanasoff–Berry Computer (ABC) was the first to use innovations such as vacuum tubes. Credit: Iowa State Univ. Lib./Special Collections Dept.]
    ...teamed up with recent graduate Clifford Berry to develop the system that became known as the Atanasoff–Berry Computer (ABC). Built on a shoestring budget, the simple 'breadboard' prototype that emerged contained significant innovations. These included the use of vacuum tubes as the computing mechanism and operating memory; binary and logical calculation; serial computation; and the use of capacitors as storage memory. By the summer of 1940, Smiley tells us, a second, more-developed prototype was running and Atanasoff and Berry had written a 35-page manuscript describing it.
    Other people were working on similar devices. In the United Kingdom and at Princeton University in New Jersey, Turing was investigating practical outlets for the concepts in his 1936 paper 'On Computable Numbers'. In London, British engineer Tommy Flowers was using vacuum tubes as electronic switches for telephone exchanges in the General Post Office. In Germany, Konrad Zuse was working on a floating-point calculator — albeit based on electromechanical technology — that would have a 64-word storage capacity by 1941. Smiley weaves these stories into the narrative effectively, giving a broad sense of the rich ecology of thought that burgeoned during this crucial period of technological and logical development.
    The Second World War changed everything. Atanasoff left Iowa State to work in the Naval Ordnance Laboratory in Washington DC.
  • Efficient Processing of Deep Neural Networks
    Efficient Processing of Deep Neural Networks. Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, Joel Emer. Massachusetts Institute of Technology. Reference: V. Sze, Y.-H. Chen, T.-J. Yang, J. S. Emer, "Efficient Processing of Deep Neural Networks," Synthesis Lectures on Computer Architecture, Morgan & Claypool Publishers, 2020. For book updates, sign up for the mailing list at http://mailman.mit.edu/mailman/listinfo/eems-news. June 15, 2020.
    Abstract: This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this comes at the cost of high computational complexity. Therefore, techniques that enable efficient processing of deep neural networks to improve key metrics—such as energy-efficiency, throughput, and latency—without sacrificing accuracy or increasing hardware costs are critical to enabling the wide deployment of DNNs in AI systems. The book includes background on DNN processing; a description and taxonomy of hardware architectural approaches for designing DNN accelerators; key metrics for evaluating and comparing different designs; features of DNN processing that are amenable to hardware/algorithm co-design to improve energy efficiency and throughput; and opportunities for applying new technologies. Readers will find a structured introduction to the field as well as a formalization and organization of key concepts from contemporary work that provide insights that may spark new ideas.
    Contents: Preface 9; I Understanding Deep Neural Networks 13; 1 Introduction 14; 1.1 Background on Deep Neural Networks .
  • The UNIVAC System, 1948
    WHAT'S YOUR PROBLEM? Is it the tedious record-keeping and the arduous figure-work of commerce and industry? Or is it the intricate mathematics of science? Perhaps your problem is now considered impossible because of prohibitive costs associated with conventional methods of solution. The UNIVAC* SYSTEM has been developed by the Eckert-Mauchly Computer Corporation to solve such problems. Within its scope come applications as diverse as air traffic control, census tabulations, market research studies, insurance records, aerodynamic design, oil prospecting, searching chemical literature and economic planning. The UNIVAC COMPUTER and its auxiliary equipment are pictured on the cover and schematically presented on the opposite page.
    ELECTRONS WORK FASTER ---- thousands of times faster ---- than relays and mechanical parts. The UNIVAC uses the inherently high speed of the electron tube to obtain maximum productivity with minimum equipment. Electrons work faster than ever before in the newly designed UNIVAC COMPUTER, in which little more than one-millionth of a second is needed to deal with a decimal digit. Coupled with this computer are magnetic tape records which can be read and classified while new records are generated at a rate of ten thousand decimal digits per second.
    AUTOMATIC OPERATION is the key to greater economies in the handling of all sorts of information, both numerical and alphabetic. For routine tasks only a small operating staff is required. Changing from one job to another is only a matter of a few minutes. Flexibility and versatility are inherent in the UNIVAC method of electronic control, making use of an extremely large storage facility for "memorizing" instructions.
    LOW MAINTENANCE AND HIGH RELIABILITY are assured by a design which draws on the technical skill of a group of engineers who have specialized in electronic computing techniques.
  • The Von Neumann Architecture of Computer Systems
    The von Neumann Architecture of Computer Systems. H. Norton Riley, Computer Science Department, California State Polytechnic University, Pomona, California. September, 1987. (http://www.csupomona.edu/~hnriley/www/VonN.html)
    Any discussion of computer architectures, of how computers and computer systems are organized, designed, and implemented, inevitably makes reference to the "von Neumann architecture" as a basis for comparison. And of course this is so, since virtually every electronic computer ever built has been rooted in this architecture. The name applied to it comes from John von Neumann, who as author of two papers in 1945 [Goldstine and von Neumann 1963, von Neumann 1981] and coauthor of a third paper in 1946 [Burks, et al. 1963] was the first to spell out the requirements for a general purpose electronic computer. The 1946 paper, written with Arthur W. Burks and Hermann H. Goldstine, was titled "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument," and the ideas in it were to have a profound impact on the subsequent development of such machines.
    Von Neumann's design led eventually to the construction of the EDVAC computer in 1952. However, the first computer of this type to be actually constructed and operated was the Manchester Mark I, designed and built at Manchester University in England [Siewiorek, et al. 1982]. It ran its first program in 1948, executing it out of its 96-word memory. It executed an instruction in 1.2 milliseconds, which must have seemed phenomenal at the time. Using today's popular "MIPS" terminology (millions of instructions per second), it would be rated at 0.00083 MIPS.
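    As a quick check of that figure (my arithmetic, not the article's): one instruction every 1.2 milliseconds is 1/0.0012 ≈ 833 instructions per second, and 833/1,000,000 ≈ 0.00083 MIPS. In C:

```c
#include <stdio.h>

int main(void) {
    double t_insn = 1.2e-3;      /* seconds per instruction (Mark I) */
    double ips  = 1.0 / t_insn;  /* ~833 instructions per second */
    double mips = ips / 1e6;     /* ~0.00083 MIPS */
    printf("%.0f instructions/s = %.5f MIPS\n", ips, mips);
    return 0;
}
```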
  • Architectural Adaptation for Application-Specific Locality
    Architectural Adaptation for Application-Specific Locality Optimizations
    Xingbin Zhang, Ali Dasdan, Martin Schulz†, Rajesh K. Gupta‡, Andrew A. Chien
    Department of Computer Science, University of Illinois at Urbana-Champaign, {zhang,dasdan,achien}@cs.uiuc.edu
    †Institut für Informatik, Technische Universität München, [email protected]
    ‡Information and Computer Science, University of California at Irvine, [email protected]
    Abstract. We propose a machine architecture that integrates programmable logic into key components of the system with the goal of customizing architectural mechanisms and policies to match an application. This approach presents an improvement over the traditional approach of exploiting programmable logic as a separate co-processor, by preserving machine usability through software, and over traditional computer architecture, by providing application-specific hardware assists. We present two case studies of architectural customization to enhance latency tolerance and efficiently utilize network bisection on multiprocessors for sparse matrix computations. We demonstrate that application-specific hardware assists and policies can provide substantial improvements in performance on a per-application basis.
    ...without repartitioning hardware and software functionality and reimplementing the co-processing hardware. This retargetability problem is an obstacle toward exploiting programmable logic for general purpose computing. We base our design on the premise that communication is already critical and getting increasingly so [17], and that flexible interconnects can be used to replace static wires at competitive performance [6, 9, 20]. Our approach presents an improvement over co-processing by preserving machine usability through software and over traditional computer architecture by providing application-specific hardware assists.
  • An Overview of Parallel Computing
    An Overview of Parallel Computing. Marc Moreno Maza, University of Western Ontario, London, Ontario (Canada). Chengdu HPC Summer School, 20–24 July 2015.
    Plan: 1 Hardware; 2 Types of Parallelism; 3 Concurrency Platforms: Three Examples (Cilk, CUDA, MPI).
    Hardware: von Neumann Architecture
    In 1945, the Hungarian mathematician John von Neumann proposed this organization for hardware computers. The Control Unit fetches instructions/data from memory, decodes the instructions and then sequentially coordinates operations to accomplish the programmed task. The Arithmetic Unit performs basic arithmetic operations, while Input/Output is the interface to the human operator. [Figure: the Pentium family.]
    Parallel computer hardware
    Most computers today (including tablets, smartphones, etc.) are equipped with several processing units (control + arithmetic units). Various characteristics determine the types of computations: shared memory vs distributed memory, single-core processors vs multicore processors, data-centric parallelism vs task-centric parallelism. Historically, shared memory machines have been classified as UMA and NUMA, based upon memory access times.
    Uniform memory access (UMA)
    Identical processors, equal access and access times to memory. In the presence of cache memories, cache coherency is accomplished at the hardware level: if one processor updates a location in shared memory, then all the other processors know about the update. UMA architectures were first represented by Symmetric Multiprocessor (SMP) machines. Multicore processors follow the same architecture and, in addition, integrate the cores onto a single circuit die.
    Non-uniform memory access (NUMA)
    Often made by physically linking two or more SMPs (or multicore processors).
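    To ground the shared-memory idea in code (my sketch, not the slides'): in a shared-memory machine all threads see the same address space, so passing data between processing units is just writing to memory that another thread can read. A minimal example with C11 threads, assuming a toolchain that ships <threads.h> (e.g. glibc 2.28+); on other platforms POSIX threads would play the same role.

```c
#include <stdio.h>
#include <threads.h>

int shared[4];  /* one address space visible to all threads
                 * (UMA: equal access for every core) */

int worker(void *arg) {
    int id = *(int *)arg;
    shared[id] = id * id;  /* each thread writes its own slot */
    return 0;
}

int main(void) {
    thrd_t t[4];
    int ids[4] = { 0, 1, 2, 3 };
    for (int i = 0; i < 4; i++) thrd_create(&t[i], worker, &ids[i]);
    for (int i = 0; i < 4; i++) thrd_join(t[i], NULL);
    for (int i = 0; i < 4; i++) printf("shared[%d] = %d\n", i, shared[i]);
    return 0;
}
```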
  • Development of a Predictable Hardware Architecture Template and Integration Into an Automated System Design Flow
    Development of a Predictable Hardware Architecture Template and Integration into an Automated System Design Flow
    Master of Science Thesis in System-on-Chip Design
    KTH Royal Institute of Technology, School of Information and Communication Technology, Electronic Systems
    Stockholm, Sweden, June 2013. TRITA-ICT-EX-2013:138
    Author: Marcus Mikulcak. Examiner: Assoc. Prof. Ingo Sander. Supervisor: Seyed Hosein Attarzadeh Niaki
    Abstract: The requirements of safety-critical real-time embedded systems pose unique challenges on their design process which cannot be fulfilled with traditional development methods. To ensure their correct timing and functionality, it has been suggested to move the design process to a higher abstraction level, which opens the possibility to utilize automated correct-by-design development flows from a functional specification of the system down to the level of Multiprocessor Systems-on-Chip. ForSyDe, an embedded system design methodology, presents a flow of this kind by basing system development on the theory of Models of Computation and side-effect-free processes, making it possible to separate the timing analysis of computation and communication of process networks.