Course #: CSI 440/540 High Perf Sci Comp I Fall ‘09


High Performance Computing
CSI 440/540 High Perf Sci Comp I, Fall '09
Mark R. Gilder
Email: [email protected]

CSI 440/540
This course investigates the latest trends in high-performance computing (HPC) evolution and examines key issues in developing algorithms capable of exploiting these architectures.

Grading: Your grade in the course will be based on completion of assignments (40%), course project (35%), class presentation (15%), and class participation (10%).

Course Goals
• Understanding of the latest trends in HPC architecture evolution
• Appreciation for the complexities in efficiently mapping algorithms onto HPC architectures
• Familiarity with various program transformations in order to improve performance
• Hands-on experience in design and implementation of algorithms for both shared- and distributed-memory parallel architectures using Pthreads, OpenMP, and MPI
• Experience in evaluating the performance of parallel programs

Grades
• 40% Homework assignments
• 35% Final project
• 15% Class presentations
• 10% Class participation

Homework
• Usually weekly, with some exceptions
• Must be turned in on time – no late homework assignments will be accepted
• All work must be your own – cheating will not be tolerated
• All references must be cited
• Assignments may consist of problems, programming, or a combination of both

Homework (continued)
• Detailed discussion of your results is expected – the program is only a small part of the problem
• Homework assignments will be posted on the class website along with all of the lecture notes
  ◦ http://www.cs.albany.edu/~gilder

Project
• Topic of general interest to the course
• Read three or four papers to get some ideas of the latest research in the HPC field
• Identify an application that can be implemented on our RIT cluster
• Implement it and write a final report describing your application and results
• Present your results in class

Other Remarks
• Would like the course to be very interactive
• Willing to accept suggestions for changes in content and/or form

Material
• Book(s) optional
  ◦ Introduction to Parallel Computing, 2nd Edition, by Grama et al. ISBN: 0-201-64865-2
  ◦ The Sourcebook of Parallel Computing (The Morgan Kaufmann Series) by J. Dongarra, I. Foster, G. Fox, et al. ISBN: 1558608710
  ◦ Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers by B. Wilkinson and M. Allen. ISBN: 0136717101

Material (continued)
• Lecture notes will be provided online either before or just after class
• Other reading material may be assigned
• Course website: http://www.cs.albany.edu/~gilder/

Course Overview
• Learning about:
  ◦ High-Performance Computing (HPC)
  ◦ Parallel Computing
  ◦ Performance Analysis
  ◦ Computational Techniques
  ◦ Tools to aid in parallel programming
  ◦ Developing programs using MPI, Pthreads, maybe OpenMP, maybe CUDA (a minimal MPI sketch follows below)
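Since MPI figures in both the course goals and the overview above, here is a minimal sketch of an MPI program in C, added for illustration (it is not part of the original slides). It assumes an MPI implementation such as MPICH or Open MPI is available and that programs are built with mpicc.

    /* hello_mpi.c - minimal MPI sketch: every process reports its rank.
     * Illustrative only; not from the course slides.
     * Build: mpicc hello_mpi.c -o hello_mpi
     * Run:   mpirun -np 4 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id         */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut down MPI cleanly     */
        return 0;
    }

Running with mpirun -np 4 launches four cooperating processes; distributed-memory programming in this course builds on this single-program, multiple-data pattern.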
What You Should Learn
• In-depth understanding of:
  ◦ When parallel computing is useful
  ◦ Parallel computing options
  ◦ Overview of programming models
  ◦ Performance analysis and tuning

Background
• Strong C programming experience
• Understanding of operating systems
• Some background in numerical computing

Computer Accounts
• For most of the class we will be using the RIT computer cluster. See the following link for more info about the hardware: http://www.rit.albany.edu/wiki/IBM_pSeries_Cluster
• Accounts will be made available by the second week of class

Homework #1
Implement a version of each of the following operations:

1) Matrix–vector multiplication: c_i = \sum_{j=1}^{n} A_{i,j} \, x_j, for i = 1, ..., m

2) Matrix multiplication: C_{i,j} = \sum_{k=1}^{n} A_{i,k} \, B_{k,j}, for i, j = 1, ..., n

The point of this assignment is not to focus on writing software but rather to look at the performance of each of your implementations and try to explain the observed behavior. You should run several experiments on various systems and provide an analysis of your results. This should include plots of your data for various values of n between, say, 10 and 5000. Make sure you provide a write-up along with your plots, and be sure to demonstrate that your implementation is also generating the correct results.

Information on various processors may be found at:
http://www.cpu-world.com/CPUs/index.html
http://www.cpu-world.com/sspec/index.html
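To make the measurement task concrete, below is a minimal C sketch of the two kernels with wall-clock timing. It is an illustration added here, not a solution – the analysis and write-up are the real assignment. It assumes square matrices (m = n), a trivial fill pattern so results are checkable, and the POSIX clock_gettime timer; none of these are requirements from the handout.

    /* hw1_sketch.c - naive matrix-vector and matrix-matrix products,
     * timed with a monotonic wall clock. Row-major n x n matrices.
     * Build: cc -O2 hw1_sketch.c -o hw1_sketch   (add -lrt on older systems)
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static double wall_seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    /* c = A * x */
    static void matvec(int n, const double *A, const double *x, double *c)
    {
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int j = 0; j < n; j++)
                sum += A[(size_t)i * n + j] * x[j];
            c[i] = sum;
        }
    }

    /* C = A * B, naive triple loop */
    static void matmul(int n, const double *A, const double *B, double *C)
    {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int k = 0; k < n; k++)
                    sum += A[(size_t)i * n + k] * B[(size_t)k * n + j];
                C[(size_t)i * n + j] = sum;
            }
    }

    int main(int argc, char *argv[])
    {
        int n = (argc > 1) ? atoi(argv[1]) : 500;  /* problem size */
        double *A = malloc((size_t)n * n * sizeof *A);
        double *B = malloc((size_t)n * n * sizeof *B);
        double *C = malloc((size_t)n * n * sizeof *C);
        double *x = malloc(n * sizeof *x);
        double *c = malloc(n * sizeof *c);

        /* simple fill so results are checkable: c_i = 3n, C_ij = 2n */
        for (size_t i = 0; i < (size_t)n * n; i++) { A[i] = 1.0; B[i] = 2.0; }
        for (int i = 0; i < n; i++) x[i] = 3.0;

        double t0 = wall_seconds();
        matvec(n, A, x, c);
        double t1 = wall_seconds();
        matmul(n, A, B, C);
        double t2 = wall_seconds();

        /* 2n^2 and 2n^3 flops; print one result element as a sanity check */
        printf("n=%d  matvec %.4fs (%.1f Mflop/s, c[0]=%g)  "
               "matmul %.4fs (%.1f Mflop/s, C[0]=%g)\n",
               n, t1 - t0, 2.0 * n * n / (t1 - t0) / 1e6, c[0],
               t2 - t1, 2.0 * n * n * n / (t2 - t1) / 1e6, C[0]);

        free(A); free(B); free(C); free(x); free(c);
        return 0;
    }

Sweeping n (e.g., from 10 to 5000) and plotting Mflop/s against n makes cache effects visible – exactly the behavior the write-up should try to explain.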
Lecture 1
• Outline:
  ◦ HPC Introduction
  ◦ Motivation
  ◦ General Computing Trends

Units of High Performance Computing

  Term       Expanded        Actual Performance
  1 Kflop/s  1 Kiloflop/s    10^3 Flop/sec
  1 Mflop/s  1 Megaflop/s    10^6 Flop/sec
  1 Gflop/s  1 Gigaflop/s    10^9 Flop/sec
  1 Tflop/s  1 Teraflop/s    10^12 Flop/sec
  1 Pflop/s  1 Petaflop/s    10^15 Flop/sec

  Term       Expanded        Data
  1 KB       1 Kilobyte      10^3 Bytes
  1 MB       1 Megabyte      10^6 Bytes
  1 GB       1 Gigabyte      10^9 Bytes
  1 TB       1 Terabyte      10^12 Bytes
  1 PB       1 Petabyte      10^15 Bytes

HPC Highlights
High Performance Computing: an overloaded term, but it typically refers to the use of computer clusters and/or custom supercomputers to solve large-scale scientific computations. It essentially covers computers designed for large computations and/or data-intensive tasks. These systems rely on parallel processing to increase algorithm performance (speed-up).
Example applications include:
• Computational Fluid Dynamics (CFD)
• Large-scale modeling / simulation
• Bioinformatics
• Molecular dynamics
• Financial applications

HPC Highlights
HPC attributes:
• Multiple processors – 10s, 100s, 1000s
• High-speed interconnect network, e.g., InfiniBand, GigE, etc.
• Clusters typically built from COTS / commodity components
• Supercomputers built from a mix of both commodity and custom components
• Performance typically in the Teraflop range (10^12 floating-point operations / sec)

HPC Data Example
Let's say you can print 5 columns of 100 numbers each, on both sides of a page – 1,000 numbers (a Kflop's worth of results) in one second, i.e., 1 Kflop/s. Then:
• 10^6 numbers (1 Mflop) = 1,000 pages (about 10 cm) – 2 reams of paper per second at 1 Mflop/s
• 10^9 numbers (1 Gflop) = 10,000 cm = a 100 m stack – the height of the Statue of Liberty, printed per second at 1 Gflop/s
• 10^12 numbers (1 Tflop) = a 100 km stack – the altitude achieved by SpaceShipOne, per second at 1 Tflop/s
• 10^15 numbers (1 Pflop) = a 100,000 km stack printed per second at 1 Pflop/s – about 1/4 the distance to the moon
• 10^16 numbers (10 Pflop) = a 1,000,000 km stack printed per second at 10 Pflop/s – the distance to the moon and back, and then a bit

High Performance Computing Today
• In the past decade, the world has experienced one of the most exciting periods in computer development
• Microprocessors have become smaller, denser, and more powerful
• The result is that microprocessor-based supercomputing is rapidly becoming the technology of preference for attacking some of the most important problems of science and engineering

Lecture 1
• Outline:
  ◦ HPC Introduction
  ◦ Motivation
  ◦ General Computing Trends

HPC Motivation
• Until recently, increases in performance were accomplished by increases in clock speed
• Power and heat dissipation limits have clock frequencies stagnating
• Legacy software investments are at risk

HPC Motivation
• Legacy software is based on a single thread of execution
• We need to start thinking in parallel
• The lack of compiler/language tools means more painful and costly software development cycles

HPC Motivation
• Parallel or concurrent designs lead to multicore
• However, multicore may not be enough for some problems
• Heterogeneous systems to the rescue: these consist of a collection of processors designed for specific problems, tied together
• Heterogeneous multicore: the same thing on a single chip
• Result: a diversity of computing architectures with limited tools for exploiting their capabilities
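The speed-up referred to in the HPC Highlights slide – and the quantity the multicore shift forces us to reason about – has a standard definition, added here for reference (it is not spelled out on the slides):

    S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}

where T(p) is the wall-clock time on p processors, S(p) is the speed-up, and E(p) is the parallel efficiency. For example, a program that takes 100 s on one processor and 25 s on eight processors achieves S(8) = 4 and E(8) = 0.5, i.e., half of the ideal linear speed-up.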