
Performance Optimization for HPC Architectures

Shirley V. Moore and Philip J. Mucci
Innovative Computing Laboratory, University of Tennessee
[email protected], [email protected]

Sameer Shende
University of Oregon
[email protected]

NRL-Monterey, Dec 3-4, 2003

Course Outline
- HPC Architectures
- Performance Optimization Issues
- Compiler Optimizations
- Tuned Numerical Libraries
- Hand Tuning
- Communication Performance
- OpenMP Performance
- Performance Analysis Tools
- Performance Results

HPC Architectures

Architecture Evolution
- Moore's Law: microprocessor CPU performance doubles every 18 months.
- Cost and size of storage have fallen along a similar exponential curve.
- But the decrease in time to access storage, called latency, has not kept up, thus leading to
  - deeper and more complex memory hierarchies
  - "load-store" architectures

Processor-DRAM Gap (latency)
[Figure: processor vs. DRAM performance, 1980-2000. CPU performance ("Moore's Law") improves roughly 60% per year while DRAM latency improves only about 7% per year, so the processor-memory performance gap grows about 50% per year.]

Processor Families
- Have high-level design features in common
- Four broad families over the past 30 years
  - CISC
  - Vector
  - RISC
  - VLIW

CISC
- Complex Instruction Set Computer
- Designed in the 1970s
- Goal: define a set of assembly instructions so that high-level language constructs could be translated into as few assembly language instructions as possible => many instructions access memory, many instruction types
- CISC instructions are typically broken down into lower-level instructions called microcode.
- Difficult to pipeline instructions on CISC processors
- Examples: VAX 11/780, Intel Pentium Pro

Vector Processors
- Seymour Cray introduced the Cray 1 in 1976.
- Dominated HPC in the 1980s
- Perform operations on vectors of data
- Vector pipelining (called chaining)
- Examples: Cray T90, Convex C-4, Cray SV1, Cray SX-6, Cray X1, POWER5?

RISC
- Reduced Instruction Set Computer
- Designed in the 1980s
- Goals
  - Decrease the number of clocks per instruction (CPI)
  - Pipeline instructions as much as possible
- Features
  - No microcode
  - Relatively few instructions, all the same length
  - Only load and store instructions access memory
  - Execution of branch delay slots
  - More registers than CISC processors

RISC (cont.)
- Additional features
  - Branch prediction
  - Superscalar processors
    - Static scheduling
    - Dynamic scheduling
  - Out-of-order execution
  - Speculative execution
- Examples: MIPS R10K/12K/14K, Alpha 21264, Sun UltraSPARC III, IBM POWER3/POWER4

VLIW
- Very Long Instruction Word
- Explicitly designed for instruction-level parallelism (ILP)
- Software determines which instructions can be performed in parallel, bundles this information and the instructions, and passes the bundle to the hardware.
- Example: Intel-HP Itanium

Architecture Changes in the 1990s
- 64-bit addresses
- Optimization of conditional branches via conditional execution (e.g., conditional move)
- Optimization of cache performance via prefetch
- Support for multimedia and DSP instructions
- Faster integer and floating-point operations
- Reduced branch costs with dynamic hardware prediction

Pipelining
- Overlapping the execution of multiple instructions
- Assembly line metaphor
- Simple pipeline stages
  - Instruction fetch cycle (IF)
  - Instruction decode/register fetch cycle (ID)
  - Execution/effective address cycle (EX)
  - Memory access/branch completion cycle (MEM)
  - Write-back cycle (WB)

Pipeline with Multicycle Operations
[Figure: the IF and ID stages feed functional units of different depths (a one-cycle integer unit EX, a seven-stage floating-point multiplier M1-M7, a four-stage floating-point adder A1-A4, and an unpipelined divider DIV), which converge on the MEM and WB stages.]

Pipeline Hazards
- Situations that prevent the next instruction in the pipeline from executing during its designated clock cycle and thus cause pipeline stalls
- Types of hazards
  - Structural hazard - a resource conflict when the hardware cannot support all instructions simultaneously
  - Data hazard - when an instruction depends on the results of a previous instruction
  - Control hazard - caused by branches and other instructions that change the PC

Memory Hierarchy Design
- Exploits the principle of locality - programs tend to reuse data and instructions they have used recently
  - Temporal locality - recently accessed items are likely to be accessed in the near future
  - Spatial locality - items whose addresses are near each other are likely to be accessed close together in time
- Takes advantage of the cost-performance of memory technologies
  - Fast memory is more expensive.
- Goal: provide a memory system with cost almost as low as the cheapest level of memory and speed almost as fast as the fastest level.
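To make the two kinds of locality concrete, here is a minimal C sketch (added for illustration; it is not from the original slides). C stores two-dimensional arrays in row-major order, so the loop order decides whether successive accesses fall in the same cache line:

    #include <stdio.h>

    #define N 1024

    static double a[N][N];

    /* Row-major traversal: the inner loop walks consecutive
       addresses, so every byte of each fetched cache line is
       used (good spatial locality). */
    double sum_row_major(void) {
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        return sum;
    }

    /* Column-major traversal: consecutive inner-loop accesses
       are N * sizeof(double) bytes apart, so nearly every access
       touches a different cache line (poor spatial locality). */
    double sum_col_major(void) {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += a[i][j];
        return sum;
    }

    int main(void) {
        printf("%f %f\n", sum_row_major(), sum_col_major());
        return 0;
    }

Both functions perform identical arithmetic, but on most machines sum_row_major runs several times faster once the array is larger than the outermost cache, because each fetched cache line is fully consumed before it is evicted.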
Typical Memory Hierarchy
[Figure: the CPU and its registers, backed by cache, main memory, and disk/I/O devices.]

        Register    Cache      Memory     Disk
        reference   reference  reference  reference
Size:   500 bytes   64 KB      512 MB     100 GB
Speed:  0.25 ns     1 ns       100 ns     5 ms

Memory Technologies
- Main memory is usually built from dynamic random access memory (DRAM) chips.
  - DRAM must be "refreshed".
- Caches are usually built from faster but more expensive static random access memory (SRAM) chips.
- Cycle time - the minimum time between requests to memory
- The cycle time of SRAMs is 8 to 16 times faster than that of DRAMs, but SRAMs are also 8 to 16 times more expensive.

Memory Technologies (cont.)
- Two times are important in measuring memory performance:
  - Access time is the time from when a read or write is requested until it arrives at its destination.
  - Cycle time is the minimum time between requests to memory.
- Since SRAM does not need to be refreshed, there is no difference between its access time and cycle time.
- Simple DRAM results in each memory transaction requiring the sum of the access time plus the cycle time.

Memory Interleaving
- Multiple banks of memory organized so that sequential words are located in different banks
- Multiple banks can be accessed simultaneously.
- Reduces effective cycle time
- Bank stall or bank contention - occurs when the memory access pattern is such that the same banks are repeatedly accessed

Cache Characteristics
- Number of caches
- Cache sizes
- Cache line size
- Associativity
- Replacement policy
- Write strategy

Cache Characteristics (cont.)
- A cache line is the smallest unit of memory that can be transferred to and from main memory.
  - Usually between 32 and 128 bytes
- In an n-way associative cache, any cache line from memory can map to any of the n locations in a set.
  - A 1-way set associative cache is called direct mapped.
- A fully associative cache is one in which a cache line can be placed anywhere in the cache.
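As a concrete illustration of these mapping rules, the sketch below (not from the slides; the 32 KB, 64-byte-line, 4-way geometry is just an assumed example) shows how an address decomposes into a line offset, a set index, and a tag:

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed cache geometry, chosen only for illustration:
       32 KB total, 64-byte lines, 4-way set associative. */
    #define CACHE_SIZE (32 * 1024)
    #define LINE_SIZE  64
    #define WAYS       4
    #define NUM_SETS   (CACHE_SIZE / (LINE_SIZE * WAYS)) /* 128 sets */

    int main(void) {
        uintptr_t addr = 0x7ffe1234; /* arbitrary example address */

        /* Byte position within the cache line. */
        uintptr_t offset = addr % LINE_SIZE;

        /* Set index: the line may occupy any of the WAYS
           locations within this one set. */
        uintptr_t set = (addr / LINE_SIZE) % NUM_SETS;

        /* Tag: stored with the line so a lookup can tell which
           memory line currently occupies each way. */
        uintptr_t tag = addr / ((uintptr_t)LINE_SIZE * NUM_SETS);

        printf("offset = %lu, set = %lu, tag = 0x%lx\n",
               (unsigned long)offset, (unsigned long)set,
               (unsigned long)tag);
        return 0;
    }

Setting WAYS to 1 gives the direct-mapped case, while making NUM_SETS equal to 1 describes a fully associative cache.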
Cache Hits and Misses
- When the CPU finds a requested data item in the cache, a cache hit occurs.
- If the CPU does not find the data item it needs in the cache, a cache miss occurs.
- Upon a cache miss, a cache line is retrieved from main memory (or a higher level of cache) and placed in the cache.
- The cache miss rate is the fraction of cache accesses that result in a miss.
- The time required to service a cache miss, called the cache miss penalty, depends on both the latency and the bandwidth of the memory.
- The cycles during which the CPU is stalled waiting for memory access are called memory stall cycles (memory stall cycles = number of misses x miss penalty).

Types of Cache Misses
Cache misses can be classified as follows:
1) compulsory - the very first access to a cache line
2) capacity - when the cache cannot contain all the cache lines needed during execution of a program
3) conflict - in a (less than fully) set associative or direct mapped cache, a conflict miss occurs when a cache line must be discarded and later retrieved because too many cache lines mapped to the same set

Multiple Levels of Cache
- Using multiple levels of cache allows a small, fast cache to keep pace with the CPU, while slower, larger caches reduce the miss penalty by capturing many accesses that would otherwise go to main memory.
- The local miss rate is large for higher-level caches because the first-level cache benefits the most from data locality. Thus a global miss rate, which indicates what fraction of the memory accesses that leave the CPU go all the way to memory, is a more useful measure. Let us define these terms as follows:
  - local miss rate - the number of misses in a cache divided by the total number of memory accesses to this cache
  - global miss rate - the number of misses in a cache divided by the total number of memory accesses generated by the CPU (Note: for the first-level cache this is the same as the local miss rate)
- For example, if the L1 local miss rate is 4% and the L2 local miss rate is 50%, the L2 global miss rate is 0.04 x 0.5 = 2% of all CPU memory accesses.

Nonblocking Caches
Pipelined computers that allow out-of-order execution can continue fetching instructions from the instruction cache while waiting on a data cache miss. A nonblocking cache design allows the data cache to continue to supply cache hits during a miss, called "hit under miss", or "hit under multiple miss" if multiple misses can be overlapped. Hit under miss significantly increases the complexity of the cache controller, and the complexity grows as the number of outstanding misses allowed increases. Out-of-order processors with hit under miss are generally capable of hiding the miss penalty of an L1 data cache miss that hits in the L2 cache, but they cannot hide a significant portion of the L2 miss penalty.

Cache Replacement Policy
- Possible policies
  - Least Recently Used (LRU)
  - Random
  - Round robin
- LRU performs better than random or round robin but is more difficult to implement.
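To see why LRU is the harder policy to implement, consider this hedged software sketch (added for illustration, not from the slides) of exact LRU bookkeeping for a single 4-way set using age counters; the state must be updated on every access to the set, which is why real hardware often approximates LRU or falls back to random replacement:

    #include <stdio.h>

    #define WAYS 4

    /* One age counter per way: 0 means most recently used,
       WAYS - 1 means least recently used. The counters always
       form a permutation of 0..WAYS-1. */
    static int age[WAYS] = {0, 1, 2, 3};

    /* Mark `way` as most recently used. Every way that was more
       recently used than `way` ages by one step. Exact LRU must
       do this bookkeeping on every access to the set. */
    void touch(int way) {
        for (int w = 0; w < WAYS; w++)
            if (age[w] < age[way])
                age[w]++;
        age[way] = 0;
    }

    /* On a miss, evict the way whose age is WAYS - 1. */
    int victim(void) {
        for (int w = 0; w < WAYS; w++)
            if (age[w] == WAYS - 1)
                return w;
        return 0; /* unreachable while ages remain a permutation */
    }

    int main(void) {
        touch(2); /* hit in way 2 */
        touch(0); /* hit in way 0 */
        printf("evict way %d\n", victim()); /* prints 3 here */
        return 0;
    }

Random replacement, by contrast, needs no per-access state at all, which is why it is attractive in hardware despite its slightly worse hit rates.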