Computer Hardware Architecture Lecture 4
16 pages, PDF, 1020 KB
Recommended publications
2.5 Classification of Parallel Computers
2.5.1 Granularity. In parallel computing, granularity means the amount of computation in relation to communication or synchronisation; periods of computation are typically separated from periods of communication by synchronization events.
• Fine level (same operations with different data): vector processors, instruction-level parallelism. In fine-grain parallelism, relatively small amounts of computational work are done between communication events (a low computation-to-communication ratio). This facilitates load balancing but implies high communication overhead and less opportunity for performance enhancement; if the granularity is too fine, the overhead required for communication and synchronization between tasks can take longer than the computation itself.
• Operation level (different operations simultaneously).
• Problem level (independent subtasks): in coarse-grain parallelism, relatively large amounts of computational work are done between communication/synchronization events (a high computation-to-communication ratio). This implies more opportunity for performance increase, but it is harder to load balance efficiently.
2.5.2 Hardware: pipelining (used in supercomputers, e.g. the Cray-1). With N elements in the pipeline and L clock cycles per element, the calculation takes about L + N cycles; without a pipeline it takes L * N cycles. Example of code that pipelines well:
    do i = 1, k
      z(i) = x(i) + y(i)
    end do
Vector processors provide fast vector operations (operations on arrays). The loop above is also good for a vector processor (vector addition), but recursion, for example, is hard to optimise for vector processors. Example: Intel MMX, a simple vector processor.
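As an illustration of why recurrences are hard to pipeline or vectorize, here is a minimal C sketch (ours, not from the lecture) contrasting the independent element-wise addition above with a loop-carried dependence; the function names are illustrative.

    #include <stddef.h>

    /* Independent iterations: each z[i] depends only on x[i] and y[i],
     * so the loop pipelines and vectorizes well. */
    void vec_add(const float *x, const float *y, float *z, size_t k) {
        for (size_t i = 0; i < k; i++)
            z[i] = x[i] + y[i];
    }

    /* Loop-carried dependence: a[i] needs the freshly computed a[i-1],
     * so the iterations cannot simply be overlapped or done in lockstep. */
    void running_sum(float *a, size_t k) {
        for (size_t i = 1; i < k; i++)
            a[i] = a[i] + a[i - 1];
    }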
Chapter 5 Multiprocessors and Thread-Level Parallelism
From "Computer Architecture: A Quantitative Approach", Fifth Edition (Elsevier, 2012), Chapter 5: Multiprocessors and Thread-Level Parallelism.
Contents: 1. Introduction; 2. Centralized SMA (shared memory architecture); 3. Performance of SMA; 4. DMA (distributed memory architecture); 5. Synchronization; 6. Models of consistency.
1. Introduction: why multiprocessors? The need for more computing power, data-intensive applications, and utility computing all require powerful processors. Of the ways to increase processor performance, raising the clock rate has limited ability and architectural improvements (ILP, CPI) are increasingly more difficult, so multi-processor and multi-core systems are more feasible with current technologies. An advantage of multiprocessors and multi-core is replication rather than unique design.
Multiprocessor types: symmetric multiprocessors (SMP) share a single memory with uniform memory access/latency (UMA) and use a small number of cores; distributed shared memory (DSM) distributes memory among the processors, giving non-uniform memory access/latency (NUMA), with processors connected via direct (switched) and non-direct (multi-hop) interconnection networks.
Important ideas: technology drives the solutions; multi-cores have altered the game; thread-level parallelism (TLP) vs. ILP; computing and communication are deeply intertwined; write serialization exploits broadcast communication.
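To make thread-level parallelism on a shared-memory (SMP/UMA) machine concrete, here is a minimal POSIX-threads sketch; it is not taken from the book, and the thread count, problem size, and function names are illustrative assumptions.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N 1000000

    static double partial[NTHREADS];   /* one slot per thread: no write conflicts */

    /* Each thread sums its own contiguous slice of the index range. */
    static void *worker(void *arg) {
        int id = *(int *)arg;
        double s = 0.0;
        for (int i = id * (N / NTHREADS); i < (id + 1) * (N / NTHREADS); i++)
            s += 1.0 / (1.0 + i);
        partial[id] = s;
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        int ids[NTHREADS];
        for (int t = 0; t < NTHREADS; t++) {
            ids[t] = t;
            pthread_create(&tid[t], NULL, worker, &ids[t]);
        }
        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);   /* wait, then combine partial results */
            total += partial[t];
        }
        printf("sum = %f\n", total);
        return 0;
    }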
Supercomputing in Plain English: Overview
Supercomputing in Plain English: An Introduction to High Performance Computing. Part I, Overview: What the Heck is Supercomputing? Henry Neeman, Director, OU Supercomputing Center for Education & Research.
Goals of these workshops: to introduce undergrads, grads, staff and faculty to supercomputing issues, and to provide a common language for discussing supercomputing issues when we meet to work on your research. NOT to teach everything you need to know about supercomputing; that can't be done in a handful of hour-long workshops!
What is supercomputing? Supercomputing is the biggest, fastest computing right this minute; likewise, a supercomputer is the biggest, fastest computer right this minute. So the definition of supercomputing is constantly changing. Rule of thumb: a supercomputer is 100 to 10,000 times as powerful as a PC. Jargon: supercomputing is also called High Performance Computing (HPC).
What is supercomputing about? Size and speed. Size: many problems that are interesting to scientists and engineers can't fit on a PC, usually because they need more than 2 GB of RAM or more than 60 GB of hard disk. Speed: many problems that are interesting to scientists and engineers would take a very, very long time to run on a PC, months or even years, but a problem that would take a month on a PC might take only a few hours on a supercomputer.
GPU Implementation Over IPTV Software Defined Networking
Esmeralda Hysenbelliu, Int. Journal of Engineering Research and Application, ISSN 2248-9622, Vol. 7, Issue 8 (Part 1), August 2017, pp. 41-45. Research article, open access.
GPU Implementation over IPTV Software Defined Networking. Esmeralda Hysenbelliu, Information Technology Faculty, Polytechnic University of Tirana, Sheshi "Nënë Tereza", Nr. 4, Tirana, Albania (corresponding author).
ABSTRACT: Among the most important issues in IPTV software-defined networking are bandwidth and quality of service at the client side. High-quality images are required at low bandwidth, which calls for transcoding standards (compressing the image as much as possible without destroying its quality) such as H.264, H.265, VP8 and VP9. During a test performed on the SMC IPTV SDN cloud network, it was observed that a server (HP ProLiant DL380 G6 with two physical processors) could not transcode more than 30 channels simultaneously in H.264 format because the CPUs reached 100%. This is why graphics processing units (GPUs), which offer high-performance image processing, were needed. After the GPU superscalar processor was integrated and made functional via the NVENC module of the FFmpeg program, the number of channels transcoded simultaneously increased dramatically (to more than 100 channels). The aim of this paper is to implement GPU superscalar processors in IPTV cloud networks, achieving a performance improvement of more than 60%.
Keywords: GPU superscalar processor, performance improvement, NVENC, CUDA.
Date of submission: 01-05-2017. Date of acceptance: 19-08-2017.
Superscalar Fall 2020
CS232, Superscalar, Fall 2020.
What is superscalar? A superscalar processor has more than one set of functional units and executes multiple independent instructions during a clock cycle by simultaneously dispatching multiple instructions to different functional units in the processor. You can think of a superscalar processor as having more than one washer, dryer, and person who can fold, so it allows more throughput. The order of instruction execution is usually assisted by the compiler; the hardware and the compiler assure that parallel execution does not violate the intent of the program.
Example. Ordinary pipeline: four stages (Fetch, Decode, Execute, Write back), one clock cycle per stage. Executing 6 instructions takes 9 clock cycles:
    I0: F D E W
    I1:   F D E W
    I2:     F D E W
    I3:       F D E W
    I4:         F D E W
    I5:           F D E W
    cc: 1 2 3 4 5 6 7 8 9
2-degree superscalar: attempts to process 2 instructions simultaneously. Executing 6 instructions takes 6 clock cycles:
    I0: F D E W
    I1: F D E W
    I2:   F D E W
    I3:   F D E W
    I4:     F D E W
    I5:     F D E W
    cc: 1 2 3 4 5 6
Limitations of superscalar. The example above assumes that the instructions are independent of each other, so they can easily be pushed into the pipeline and issued together. However, instructions usually depend on one another. Just like the hazards in a pipeline, superscalar execution has limitations too. There are several fundamental limitations the system must cope with: true data dependency, procedural dependency, resource conflict, output dependency, and anti-dependency.
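A minimal C sketch of the true-data-dependency limitation (ours, not from the course notes): the first pair of additions is independent and could be issued together by a 2-degree superscalar core, while the second pair forms a dependence chain and cannot.

    /* Independent operations: x and y use different inputs, so a 2-way
     * superscalar processor can dispatch both additions in the same cycle. */
    int independent_adds(int a, int b, int c, int d) {
        int x = a + b;
        int y = c + d;
        return x ^ y;
    }

    /* True data dependency: the second addition reads x, so it must wait
     * for the first one to produce its result. */
    int dependent_adds(int a, int b, int c) {
        int x = a + b;
        int y = x + c;
        return y;
    }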
The Multiscalar Architecture
THE MULTISCALAR ARCHITECTURE, by Manoj Franklin. A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the University of Wisconsin-Madison, 1993. Under the supervision of Associate Professor Gurindar S. Sohi.
ABSTRACT: The centerpiece of this thesis is a new processing paradigm for exploiting instruction-level parallelism. This paradigm, called the multiscalar paradigm, splits the program into many smaller tasks, and exploits fine-grain parallelism by executing multiple, possibly (control and/or data) dependent tasks in parallel using multiple processing elements. Splitting the instruction stream at statically determined boundaries allows the compiler to pass substantial information about the tasks to the hardware. The processing paradigm can be viewed as extensions of the superscalar and multiprocessing paradigms, and shares a number of properties of the sequential processing model and the dataflow processing model.
The multiscalar paradigm is easily realizable, and we describe an implementation of the multiscalar paradigm, called the multiscalar processor. The central idea here is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. The multiscalar processor supports speculative execution, allows arbitrary dynamic code motion (facilitated by an efficient hardware memory disambiguation mechanism), exploits communication localities, and does all of these with hardware that is fairly straightforward to build. Other desirable aspects of the implementation include decentralization of the critical resources, absence of wide associative searches, and absence of wide interconnection/data paths.
Embedded System Introduction.Pdf
Embedded System Introduction. ANDES Confidential, WWW.ANDESTECH.COM.
Embedded system vs. desktop. In common: CPU, storage, I/O. Differences: a desktop can perform many kinds of functions, and its operating system manages system resources in a relatively complex way; an embedded system performs a specific function, and its operating system manages system resources in a relatively simple way.
System layers: application, operating system, firmware, hardware. A desktop computer and a complex embedded computer have all of these layers; a simple embedded computer may consist of only firmware and hardware.
Hardware architecture of a desktop computer system: CPU; north bridge with AGP slot and memory; south bridge with PCI interface, USB interface, IDE interface, system BIOS, ISA interface, and super I/O port.
Hardware architecture of an embedded computer system: CPU, ROM, RAM, ADC, SPI, digital I/O, IIC, timers, UART, bus interface, memory, network interface, I/O, and a connection to a host computer.
What is an embedded system? An embedded system is a special-purpose computer system designed to perform one or a few dedicated functions, often with real-time computing constraints; it includes hardware, software and mechanical parts. Topics: challenges in embedded system design, and design methodologies. Components of an embedded system; characteristics: low power, closed operating environment, cost sensitive. [...]
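Firmware on a simple embedded computer often runs as a bare-metal loop that pokes memory-mapped peripheral registers directly. The C sketch below illustrates the idea only; the GPIO register address, pin number, and delay count are hypothetical placeholders, not values from these slides or from any particular chip.

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO output register; a real address
     * would come from the microcontroller's datasheet. */
    #define GPIO_OUT (*(volatile uint32_t *)0x40020014u)

    static void delay(volatile uint32_t n) {
        while (n--) { }              /* crude busy-wait delay */
    }

    int main(void) {
        for (;;) {                   /* embedded firmware typically never exits */
            GPIO_OUT ^= 1u << 5;     /* toggle one output pin (pin 5, illustrative) */
            delay(100000u);
        }
    }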
14 Los Alamos National Laboratory
Every year for the past 17 years, the director of Los Alamos National Laboratory has had a legally required task: write a letter, a personal assessment of Los Alamos-designed warheads and bombs in the U.S. nuclear stockpile. This letter is sent to the secretaries of Energy and Defense and to the Nuclear Weapons Council. Through them the letter goes to the president of the United States.
The technical basis for the director's assessment comes from the Laboratory's ongoing execution of the nation's Stockpile Stewardship Program; Los Alamos' mission is to study its portion of the aging stockpile, find any problems, and address them. And for the past 17 years, the director's letter has said, in effect, that any problems that have arisen in Los Alamos weapons are being addressed and resolved without the need for full-scale underground nuclear testing. When it comes to the Laboratory's work on the annual assessment, the director's letter is just the tip of the iceberg.
[...] alternative for assessing the safety, reliability, and performance of the stockpile: virtual-world simulations.
I, Iceberg. Hollywood movies such as the Matrix series or I, Robot typically portray supercomputers as massive, room-filling machines that churn out answers to the most complex questions, all by themselves. In fact, like the director's Annual Assessment Letter, supercomputers are themselves the tip of an iceberg. Without people, a supercomputer would be no more than a humble jumble of wires, bits, and boxes. Although these rows of huge machines are the most visible [...]
Threading, SIMD and MIMD in the Multicore Context: The UltraSPARC T2
Overview. (Note: Tute 02 this Weds - handouts.) Flynn's Taxonomy; multicore architecture concepts (hardware threading; SIMD vs MIMD in the multicore context); T2 design features for multicore (system on a chip; execution: (in-order) pipeline, instruction latency; thread scheduling; caches: associativity, coherence, prefetch; memory system: crossbar, memory controller; intermission; speculation; power savings; OpenSPARC); T2 performance (why the T2 is designed as it is); the Rock processor (slides by Andrew Over; ref: Tremblay, IEEE Micro 2009).
SIMD and MIMD in the Multicore Context. Flynn's taxonomy:
                     Single Instruction   Multiple Instruction
    Single Data      SISD                 MISD
    Multiple Data    SIMD                 MIMD
For SIMD, the control unit and processor state (registers) can be shared; however, SIMD is limited to data parallelism (through multiple ALUs). Algorithms need a regular structure, e.g. dense linear algebra, graphics. Examples: SSE2, Altivec, Cell SPE (128-bit registers), e.g. a 4×32-bit add:
    Rx: x3 x2 x1 x0
    Ry: y3 y2 y1 y0   +
    Rz: z3 z2 z1 z0   (zi = xi + yi)
The design requires massive effort and support from a commodity environment; massive parallelism is possible (e.g. nVidia GPGPU), but memory is still a bottleneck. Multicore (CMT) is MIMD, and hardware threading can be regarded as MIMD; the higher hardware costs also include the larger shared resources (caches, TLBs) needed, so there is less parallelism than for SIMD.
(COMP8320 Lecture 2: Multicore Architecture and the T2, 2011.)
Hardware (multi)threading. Recall concurrent execution on a single CPU: switching between threads (or processes) requires the saving (in memory) of thread state (register values). Motivation: utilize the CPU better when a thread is stalled for I/O (6300 Lect O1, pp. 9-10). What are the costs? Do the same for smaller stalls? (e.g. [...])
The UltraSPARC T2: system on a chip. (OpenSPARC Slide Cast Ch 5: pp. 79-81, 89.) Aggressively multicore: 8 cores, each with 8-way hardware threading (64 virtual CPUs). [...]
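The 4×32-bit register add above corresponds directly to an SSE2 intrinsic; the short C sketch below (ours, not from the slides) shows a single _mm_add_epi32 performing all four element additions at once.

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stdio.h>

    int main(void) {
        /* Pack four 32-bit integers into each 128-bit register (x3 x2 x1 x0). */
        __m128i rx = _mm_set_epi32(3, 2, 1, 0);
        __m128i ry = _mm_set_epi32(30, 20, 10, 0);

        /* One instruction adds all four pairs: rz[i] = rx[i] + ry[i]. */
        __m128i rz = _mm_add_epi32(rx, ry);

        int out[4];
        _mm_storeu_si128((__m128i *)out, rz);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
        return 0;
    }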
Computer Architecture: Parallel Processing Basics
Computer Architecture: Parallel Processing Basics. Onur Mutlu & Seth Copen Goldstein, Carnegie Mellon University, 9/9/13.
Today: What is parallel processing? Why? Kinds of parallel processing: multiprocessing and multithreading. Measuring success: speedup, Amdahl's Law, bottlenecks to parallelism.
Concurrent systems: embedded-physical distributed (sensor networks, Claytronics); geographically distributed (power grid, Internet); cloud computing (EC2, Tashi); parallel. Comparing them:
                            Physical   Geographical   Cloud    Parallel
    Geophysical location    +++        ++             ---      ---
    Relative location       +++        +++            +        -
    Faults                  ++++       +++            ++++     --
    Number of processors    +++        +++            +        -
    Network structure       varies     varies         fixed    fixed
    Network connectivity    ---        ---            +        +
Concurrent system challenge: programming. The old joke: how long does it take to write a parallel program? One graduate-student year.
Parallel programming, again?? Increased demand (multicore), increased scale (cloud), improved compute/communicate ratio, and a change in application focus (irregular, recursive data structures).
Why parallel computers? Parallelism: doing multiple things at a time. Things: instructions, [...]
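The slides list speedup and Amdahl's Law as the measures of success; a worked statement of the law follows (the standard form, not reproduced from the slides), where p is the parallelizable fraction of the work and n the number of processors.

    S(n) = \frac{1}{(1 - p) + \frac{p}{n}}

For example, with p = 0.9 and n = 8, S = 1 / (0.1 + 0.1125) which is about 4.7; even as n grows without bound, the speedup is capped at 1 / (1 - p) = 10, which is exactly the kind of bottleneck to parallelism the lecture goes on to discuss.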
Advanced Engineering Mathematics
5 Linear Systems of ODEs. 5.1 Systems of ODEs. In a sense, Chapter 5 equals Chapter 2 "plus" Chapter 3, in the sense that Chapter 5 combines use of matrix theory and ordinary differential equation (ODE) methods. When we have more than one linear ODE, results from matrix theory turn out to be useful.
Example 5.1. For the circuit shown in Figure 5.1, let v(t) be the voltage drop across the capacitor and I(t) be the loop current. The input V(t) is a given function. Assume, as usual, that L, R, and C are constants. Write down a system of ODEs in R^2 that models this circuit.
Method: The series RLC circuit shown in Figure 5.1 is analogous to the DC series RLC circuit discussed near the end of Section 3.3. The first ODE models the voltage drop across the capacitor being v(t) = q(t)/C, where q(t) is the charge on the capacitor and q'(t) = I(t). The second ODE in the system is Kirchhoff's voltage law, L I'(t) + R I(t) + v(t) = V(t), after dividing through by L. The system is
    v'(t) = (1/C) I(t),
    I'(t) = (1/L) (V(t) - R I(t) - v(t)).                      (5.1)
More generally, consider a system of two ODEs in unknowns x1(t), x2(t):
    x1'(t) = F1(t, x1(t), x2(t)),
    x2'(t) = F2(t, x1(t), x2(t)).                              (5.2)
A special case is
    x1'(t) = a11(t) x1 + a12(t) x2 + f1(t),
    x2'(t) = a21(t) x1 + a22(t) x2 + f2(t),                    (5.3)
which is called a linear system.
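To show how a system like (5.1) is handled numerically, here is a minimal forward-Euler sketch in C; the component values, step size, initial conditions, and the unit-step input are illustrative assumptions, not taken from the text.

    #include <stdio.h>

    /* Forward-Euler integration of system (5.1):
     *   v'(t) = I(t)/C
     *   I'(t) = (V(t) - R*I(t) - v(t)) / L
     */
    int main(void) {
        const double L = 1.0, R = 1.0, C = 0.25;   /* assumed component values */
        double v = 0.0, I = 0.0;                   /* assumed initial conditions */
        const double dt = 1e-3;                    /* time step */

        for (int k = 0; k <= 10000; k++) {
            double t = k * dt;
            double V = 1.0;                        /* unit-step input V(t) = 1 */
            if (k % 2000 == 0)
                printf("t=%.1f  v=%.4f  I=%.4f\n", t, v, I);
            double dv = I / C;
            double dI = (V - R * I - v) / L;
            v += dt * dv;                          /* advance one Euler step */
            I += dt * dI;
        }
        return 0;
    }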
Thread-Level Parallelism I
Great Ideas in Computer Architecture (a.k.a. Machine Structures): Thread-Level Parallelism I. UC Berkeley Teaching Professor Dan Garcia and Professor Bora Nikolić. cs61c.org.
Improving performance:
1. Increase the clock rate fs: this has reached the practical maximum for today's technology, below 5 GHz for general-purpose computers.
2. Lower the CPI (cycles per instruction): SIMD, "instruction-level parallelism".
3. Perform multiple tasks simultaneously (today's lecture): multiple CPUs, each executing a different program. Tasks may be related (e.g. each CPU performs part of a big matrix multiplication) or unrelated (e.g. distribute different web HTTP requests over different computers, or run pptx (view lecture slides) and a browser (YouTube) simultaneously).
4. Do all of the above: high fs, SIMD, and multiple parallel tasks.
New-school machine structures: software harnesses hardware parallelism at every level to achieve high performance, from warehouse-scale computers and smart phones down through computers, cores, functional blocks, and logic gates. Parallel requests are assigned to a computer (e.g. search "cats"); parallel threads are assigned to a core (e.g. lookup, ads); parallel instructions execute more than one instruction at a time (e.g. 5 pipelined instructions); parallel data operate on more than one data item at a time (e.g. an add of 4 pairs of words, A0+B0, A1+B1); and in hardware descriptions all gates work in parallel at the same time (e.g. Out = AB + CD). Parallel computer architectures: massive array [...]