
Computer Architectures
D. Pleiter
Jülich Supercomputing Centre and University of Regensburg
November 2018

Overview
• Introduction
• Principles of Computer Architectures
• Processor Core Architecture
• Memory Architecture
• Processor Architectures
• Network Architecture
• Exascale Challenges

Content: Introduction

Computational Science
• Computational Science = multi-disciplinary field that uses advanced computing capabilities to understand and solve complex problems
• Key elements of computational science:
  • Application science areas
  • Algorithms and mathematical methods
  • Computer science, including areas like
    • Computer architectures
    • Performance modelling

Motivation
• Know-how on computer architectures is useful for application developers:
  • How much could I optimize my application?
  • Which performance can I expect on a given architecture?
  • Which performance can I expect for a given algorithm?
• Deep knowledge of computer architectures is required to develop application-optimized computing infrastructures:
  • Which hardware features are required by a given application?
  • Which are the optimal design parameters?

Top500 List
• Performance metric:
  • Floating-point operations per time unit while solving a dense linear set of equations ⇒ High-Performance LINPACK (HPL) benchmark
• Criticism:
  • Workload not representative
  • Problem size can be freely tuned
  • But: allows for long-term comparison
• First list published in June 1993
• New, additional metric being established: HPCG
  • Workload: sparse linear system solver based on CG

Top500 Trend: Rmax
[Figure: Rmax in GFlop/s (logarithmic scale) versus time, June 1993 to June 2018, for the Top 1, Top 1-10, and Top 1-100 systems]
Peak performance of the top 100 systems doubled every 13.5 months for many years; the doubling time has now slowed to about 26.8 months.

Content: Principles of Computer Architectures

Problem Solving Abstraction
• Numerical problem solving abstraction:
  formal or mathematical solution → computational algorithm → sequence of tasks → computer code implementation
• Execution of computer code = computation
• What is computation?
  • The process of manipulating data according to simple rules
  • Let data be represented by w ∈ {0,1}^n:
    w^(0) → w^(1) → ··· → w^(i) → ···
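The view of a computation as a chain of states w^(0) → w^(1) → ··· can be made concrete in a few lines of code. The following Python sketch is purely illustrative and not part of the original material; the transition rule (clear the rightmost 1 in the bit string) is an arbitrary assumption, chosen only so that the sequence reaches a fixed point.

    def step(w):
        """One transition on a bit string w in {0,1}^n: clear the rightmost '1'.
        The rule itself is an arbitrary, illustrative choice."""
        i = w.rfind('1')
        if i < 0:
            return w                     # all zeros: fixed point
        return w[:i] + '0' + w[i + 1:]

    def compute(w):
        """Apply the rule repeatedly, producing w(0) -> w(1) -> ..., until nothing changes."""
        while True:
            print(w)
            w_next = step(w)
            if w_next == w:
                return w
            w = w_next

    compute('10110')                     # prints 10110, 10100, 10000, 00000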
Computer Model: Deterministic Finite Automata (DFA)
• Informal definition: perform automatic operations on some given sequence of inputs
• Example: automatic door
  • Inputs:
    • Sensor front pad
    • Sensor rear pad
  • States:
    • Door open
    • Door closed
[Figure: automatic door with a front pad and a rear pad sensor]

Computer Model: DFA (cont.)
A Deterministic Finite Automaton (DFA) is defined by
• A finite set of states Q
  • Example: Q = {door open, door closed}
• An alphabet Γ
  • An alphabet is a finite set of letters
  • Example: Γ = {front, rear, both, neither}
• A transition function δ : Q × Γ → Q
• A start state q0 ∈ Q
• A set of final states F ⊆ Q
  • Allows one to define when a computation is successful

Computer Model: DFA (cont.)
Deterministic Finite Automaton (DFA) for the automatic door:
• Q = {door open, door closed}
• Γ = {front, rear, both, neither}
• Transitions:
  • {closed, front} → open
  • {open, neither} → closed
  • ...
• q0 = closed
• F = Q
Graphical representation: Finite State Machine (FSM) diagram
[FSM diagram: states "closed" and "open" with the transitions above shown as labelled edges]

Computer Model: Turing Machine
Turing Machine components: [Turing, 1936]
• Tape divided into cells
  • Each cell contains a symbol from some finite alphabet, which includes the special symbol "blank"
  • The tape is assumed to be infinite in both directions
• Head that can
  • Read and write symbols on the tape, and
  • Move the tape left and right one cell at a time
• State register that stores the state of the Turing Machine
  • The number of states is finite
  • There is a special start state
• Table of instructions
  • The number of instructions is finite
  • The chosen instruction depends on the current state and input

Computer Model: Turing Machine (cont.)
• Notation: 1011 q3 0101
  • Left-hand side of the tape is 1011
  • Current state is q3
  • The head is to the right of the state symbol, i.e. it reads 0
  • Right-hand side of the tape is 0101
• Transition δ : (Q \ F) × Γ → Q × Γ × {L, R}
  • Γ is the tape alphabet
• Example state table for Q = {A, B, C, HALT}:

         q = A               q = B               q = C
  Γ   Write Move State    Write Move State    Write Move State
  0     1    R    B         1    L    A         1    L    B
  1     1    L    C         1    R    B         1    R    HALT

• Differences compared to a DFA:
  • A Turing Machine performs write operations
  • The read-write head can move both to the left and to the right

Computer Model: Random-Access Machine (RAM)
• Components:
  • CPU with
    • PC: program counter
    • IR: instruction register
    • ACC: accumulator
  • Randomly addressable memory M
• Processing sequence:
  • Load MEM[PC] to IR
  • Increment PC
  • Execute instruction in IR
  • Repeat
[Figure: CPU (PC, IR, ACC) connected to the randomly addressable memory M]

Computer Model: RAM (cont.)
Example set of instructions:

  Instruction   Meaning
  HALT          Finish computation
  LD <a>        Load value from memory: ACC ← MEM[a]
  LDI <i>       Load immediate value: ACC ← i
  ST <a>        Store value: MEM[a] ← ACC
  ADD <a>       Add value: ACC ← ACC + MEM[a]
  ADDI <i>      Add immediate value: ACC ← ACC + i
  JUMP <i>      Update program counter: PC ← i
  JZERO <i>     Update PC if ACC == 0: PC ← i
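The fetch/increment/execute sequence of the RAM model and the instruction set above translate directly into a small interpreter. The following Python sketch is illustrative and not part of the original material: for readability the program is kept in a separate list rather than in the data memory, a HALT instruction is appended, and the memory contents (1, 2, 3) are assumed values. It executes the three-number addition shown on the next slide.

    def run(program, memory):
        """RAM model: load MEM[PC] into IR, increment PC, execute IR, repeat."""
        acc, pc = 0, 0
        while True:
            op, arg = program[pc]        # fetch the instruction into IR
            pc += 1                      # increment the program counter
            if op == 'HALT':             # finish the computation
                return acc
            elif op == 'LD':             # ACC <- MEM[a]
                acc = memory[arg]
            elif op == 'LDI':            # ACC <- i
                acc = arg
            elif op == 'ST':             # MEM[a] <- ACC
                memory[arg] = acc
            elif op == 'ADD':            # ACC <- ACC + MEM[a]
                acc += memory[arg]
            elif op == 'ADDI':           # ACC <- ACC + i
                acc += arg
            elif op == 'JUMP':           # PC <- i
                pc = arg
            elif op == 'JZERO':          # PC <- i if ACC == 0
                if acc == 0:
                    pc = arg

    # Add the numbers stored at 0x100-0x102 and store the result at 0x103
    program = [('LD', 0x100), ('ADD', 0x101), ('ADD', 0x102),
               ('ST', 0x103), ('HALT', None)]
    memory = {0x100: 1, 0x101: 2, 0x102: 3}
    run(program, memory)
    print(memory[0x103])                 # 6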
Computer Model: RAM (cont.)
Example: add the numbers stored in the address range [0x100, 0x102]

  PC     Instruction   Meaning
  0x00   LD 0x100      Initialize accumulator
  0x01   ADD 0x101     Add second number
  0x02   ADD 0x102     Add third number
  0x03   ST 0x103      Store result

Benefits and limitations of the RAM architecture:
• Memory is randomly addressable
• No direct support for memory address indirection
  • Work-around: modify the set of instructions during execution

Von Neumann Architecture
[Neumann, 1945]
• Components defined by v. Neumann:
  • Central arithmetic (CA)
  • Central control (CC)
  • Memory (M)
  • Input (I)
  • Output (O)
• Simplified modern view:
  • Central Processing Unit
  • Memory
  • I/O chipset
  • Bus
• Memory is used for both instructions and data ⇒ self-modifying code
[Figure: block diagram of the components listed above, connected via a bus]

Harvard Architecture
• "Von Neumann bottleneck":
  • Only a single path for loading instructions and data → bad processor utilization
• General purpose memory replaced by
  • Data memory
  • Instruction memory
• Independent data paths ⇒ concurrent access
• Different address spaces
[Figure: processor C with separate data memory and instruction memory]

Modern (Simple) Processor Architecture
• Components:
  • Instruction decoder and scheduler
  • Execution pipelines
  • Register file
  • Caches (L1-I, L1-D, L2)
  • External memory
• Memory architecture:
  • From the outside: von Neumann
    • General purpose memory
    • Common L2 cache
  • From the inside: Harvard
    • Data L1 cache (L1D)
    • Instruction L1 cache (L1P)
[Figure: decoder and scheduler feeding execution pipelines #0-#2, register file, L1-I and L1-D caches, shared L2 cache, external memory]

Digression on Terminology
• Event = point in time t_i where something happens
  • An event has no duration
  • Examples:
    • Instruction execution completed
    • State changed
    • Counter reaches/exceeds a given value
• Latency = distance Δt = t_j − t_i between two events, where t_i < t_j
  • Also called response time
  • Examples:
    • Time between instruction issue and completion
    • Duration of a task execution

Digression on Terminology (cont.)
• Bandwidth = amount of items passing a particular interface per unit time
  • If the transfer of l items has a latency Δt, then the bandwidth is given by b = l / Δt
• Equivalent to throughput = amount of work W completed per unit time
• Nominal bandwidth = the maximal rate B at which data can be transferred over a data bus

Pipelining
Optimize work using an assembly line:
• Weld car body
• Paint car
• Assemble motor
• Assemble wheels
This only works well if the time needed at all stations is similar.

Pipeline: Time Diagram

  Time    P0   P1   P2   P3   P4
  t0      C0
  t0+1    C1   C0
  t0+2         C1   C0
  t0+3              C1   C0
  t0+4                   C1   C0
  t0+5                        C1

Without pipeline: need 5 time units to finish 1 car
With pipeline: need 6 time units to finish 2 cars
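As a small worked example of the bandwidth definition b = l/Δt from the terminology slides above (the numbers below are assumed, purely for illustration):

    # Illustrative numbers only: effective bandwidth of a single transfer
    l = 64                               # bytes transferred
    dt = 100e-9                          # observed latency in seconds (assumed)
    b = l / dt                           # bandwidth b = l / dt
    print(b / 1e9, "GB/s")               # 0.64 GB/s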
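The pipeline time diagram also generalizes. Assuming s stages that each take one time unit (as in the car example), processing n items takes s·n time units without pipelining but only s + (n − 1) time units with pipelining; a minimal sketch under that assumption:

    def serial_time(stages, items):
        """Without pipelining: each item passes through all stages before the next starts."""
        return stages * items

    def pipeline_time(stages, items):
        """With pipelining: time until the last item leaves the last stage."""
        return stages + (items - 1)

    print(serial_time(5, 1))             # 5 time units for 1 car
    print(pipeline_time(5, 2))           # 6 time units for 2 cars, as in the diagram

    # In steady state the throughput approaches one item per time unit
    n = 1000
    print(n / pipeline_time(5, n))       # ~0.996 cars per time unit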