Non-Uniform Power Access in Large Caches with Low-Swing Wires

Aniruddha N. Udipi (University of Utah), Naveen Muralimanohar (HP Labs), Rajeev Balasubramonian (University of Utah)
[email protected], [email protected], [email protected]

This work was supported in parts by NSF grants CCF-0430063, CCF-0811249, CCF-0916436, NSF CAREER award CCF-0545959, Intel, SRC grant 1847.001, and the University of Utah.

Abstract

Modern processors dedicate more than half their chip area to large L2 and L3 caches, and these caches contribute significantly to the total processor power. A large cache is typically split into multiple banks, and these banks are either connected through a bus (uniform cache access – UCA) or an on-chip network (non-uniform cache access – NUCA). Irrespective of the cache model (NUCA or UCA), the complex interconnects that must be navigated within large caches are found to be the dominant part of cache access power. While there have been a number of proposals to minimize energy consumption in the inter-bank network, very little attention has been paid to the optimization of intra-bank network power, which contributes more than 50% of the total cache dynamic power in large cache banks. In this work we study various mechanisms that introduce low-swing wires inside cache banks as energy-saving measures. We propose a novel non-uniform power access design which, when coupled with simple architectural mechanisms, provides the best power-performance trade-off. The proposed mechanisms reduce cache bank energy by 42% while incurring a minor 1% drop in performance.

1. Introduction

To alleviate the growing gap between processors and main memory, contemporary processors have begun to provide large multi-megabyte last-level caches, often occupying upwards of half the total die area. For example, the Montecito processor from Intel has 24 MB of L3 cache [1]. Intel's consumer desktop Nehalem processors have 8 MB of L3 cache [2]. With the memory wall showing no signs of breaking down, these trends are likely to continue, with future caches only growing larger.

Large caches are likely to use a Non-Uniform Cache Access (NUCA) architecture, with the cache split into multiple banks connected by an on-chip network [3]. The CACTI 6.0 tool computes the characteristics of large NUCA caches and shows that optimal behavior is exhibited when the cache is partitioned into large banks, where each bank can accommodate a few megabytes of data [4]. About 50% of the NUCA cache's dynamic power is dissipated within these large banks (the rest is mostly within the inter-bank network). Processors that do not employ NUCA will implement monolithic multi-megabyte private or shared L2s or L3s (the Nehalem and Montecito serving as examples). In addition to consuming silicon area, these caches will also contribute significantly to the energy consumed by the processor. Large last-level shared caches often serve as the cache coherence interface on multi-core processors. Multi-threaded applications will likely make frequent, expensive look-ups into the L2 to access shared data. There already appears to be a trend towards simplifying the design and improving the power efficiency of cores: Intel's shift from the Netburst to the Core microarchitecture is a sign of things to come, and Sun's Niagara and Rock processors are also designed for low energy-per-instruction, among other things. In the meantime, if SRAM cache arrays remain stagnant in design, their contribution to overall chip power will continue to grow. Hence, this paper attempts to provide circuit/architecture innovations to improve energy dissipation within large cache banks.

We show that energy dissipation in a large cache is dominated by the H-tree network within each bank. To address this bottleneck, we propose various designs that leverage low-swing wiring within the cache. Low-swing wires are an attractive choice from the power perspective but are inherently slow. Pipelining them is inefficient and requires additional transceivers at intermediate points along the bus. If employed judiciously, however, their performance penalty can be mitigated while exploiting their low-power characteristics. We discuss these trade-offs in detail for a variety of designs. We finally show that limited use of low-swing wiring provides the best balance between performance and power. This leads us to introduce the notion of non-uniform power access, with certain regions of the cache being accessible at low energy with low-swing wires. Architectural mechanisms are required to exploit the low-power region of a cache bank, and we explore novel block placement policies to maximize use of the low-power region. Our results show significant cache energy savings at a very modest performance penalty, the penalty primarily arising from the non-pipelined nature of our low-swing transfers.
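As a rough first-order illustration of why reduced signal swing is attractive (a textbook-style estimate, not the specific wire model evaluated in this paper), the dynamic energy of a wire scales with its capacitance and the voltage levels involved:

\[
E_{\text{full-swing}} \approx \alpha\,C_{\text{wire}}\,V_{dd}^{2},
\qquad
E_{\text{low-swing}} \approx \alpha\,C_{\text{wire}}\,V_{dd}\,V_{\text{swing}},
\]

where \( \alpha \) is the switching activity and the low-swing estimate assumes a driver powered from the full supply (a dedicated low-voltage supply would reduce this further, toward \( \alpha\,C_{\text{wire}}\,V_{\text{swing}}^{2} \)). With a swing of a few hundred millivolts on a roughly 1 V supply, wire energy drops by several times, but the small swing must be resolved by sense-amplifier-style receivers, which is what makes such wires slow and costly to pipeline.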
The paper is organized as follows. Section 2 provides basics on cache bank organization. Section 3 discusses different novel designs that employ low-swing wiring within a cache bank. Architectural innovations that support non-uniform power access are described in Section 4. Section 5 shows results. We discuss related work in Section 6 and draw conclusions in Section 7.

2. Background

Large caches of the future are expected to experience increasing disparity between access delays to different parts of the cache depending on their proximity to the processor core or cache controller. This mandates a Non-Uniform Cache Access (NUCA) architecture [3], with large caches being divided into multiple banks connected by an on-chip network for data and address transfer between banks. A recent study [5] has shown that due to high power and latency overheads associated with on-chip network routers, NUCA caches will implement a few banks, each of non-trivial size. The bank count/size of each bank is determined by the relative contributions of the banks and the network to the total delay and energy consumption of the cache, and associated design constraints. According to CACTI 6.0 [4], a NUCA modeling tool that identifies an optimal trade-off point between bank and network components, a 64 MB NUCA cache will likely be partitioned into large 2 MB or 4 MB banks. Up to 50% of cache dynamic power is dissipated within these large banks. Some processors may also adopt a tiled architecture where every core is associated with a large L2 bank (either a private cache or a slice of a large shared cache) [1]. Thus, regardless of whether future processors adopt private/shared caches, or UCA/NUCA architectures, or tiled/contiguous L2 caches, it is evident that several large cache banks will be found on chip. This work focuses on reducing the significant component of dynamic power within these large cache banks. As per estimates from CACTI 5.3 [6], leakage in a 4 MB cache contributes about 20% to total power consumption. However, there are well studied circuit ([7], [8], [9]) and microarchitectural ([10], [11], [12], [13]) techniques in current literature to tackle leakage in caches. We assume that several of these can continue to be applied to the cache, orthogonal to our optimizations, and focus on dynamic power for the rest of the paper.

We now describe briefly the factors that influence the organization, power, and delay of a cache bank. A naive cache takes the form of a single array of memory cells and employs centralized decoder and logic circuitry to store and retrieve data from cells. Such a monolithic model, however, has serious scalability issues. First, wordlines (due to a lack of available silicon area) and bitlines (due to differential signaling) cannot be repeated at regular intervals, causing their delay to increase quadratically with the size of the cache. Second, the bandwidth of the cache is a function of cycle time¹ and a single array cache's bandwidth deteriorates quickly as the array size grows. The performance of the cache is thus limited by long bitlines and wordlines that span the array. To reduce this quadratic delay impact, the cache is divided into multiple segments referred to as subarrays. These subarrays need to be connected through an interconnect fabric to transfer addresses and data within the cache. In order to reduce design complexity, an interconnect with easily predictable timing characteristics is essential. A balanced H-tree network (Figure 1) provides uniform pipelined access without complex switching circuits and proves to be an ideal choice.

¹The cycle time of a cache is the sum of the wordline, bitline, and senseamp delays.
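To see why unrepeated wires scale so poorly, a first-order Elmore-style estimate (a standard textbook approximation, not the detailed model used by CACTI or by this paper) for a distributed RC line of length \( L \), with resistance \( r \) and capacitance \( c \) per unit length, is

\[
t_{\text{wire}} \approx 0.38\,r\,c\,L^{2},
\]

so doubling the span of an unsegmented wordline or bitline roughly quadruples its delay, whereas a repeated (buffered) wire grows only linearly with \( L \). Partitioning the bank into subarrays is what keeps \( L \), and hence this quadratic term, small.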
The number of subarrays that the bank is split into, the height/width of the grid of subarrays (the aspect ratio of the bank), and the aspect ratio of the subarrays themselves are defined by three fundamental parameters of cache bank design:

• NDWL - Number of vertical partitions in the array, i.e., the number of segments that a single wordline is partitioned into. This determines the number of columns of subarrays.
• NDBL - Number of horizontal partitions in the array, i.e., the number of segments that a single bitline is partitioned into. This determines the number of rows of subarrays.
• NSPD - Number of sets stored in each row of a subarray. For given Ndwl and Ndbl values, this decides the aspect ratio of the subarray.

An example organization is illustrated in Figure 1. The design space of a cache bank is defined by variations of these parameters and the resultant complex interaction of various internal power and delay components. For example, a small subarray count would enable tighter packing of cells, leading to increased area efficiency (cell area/total area), but would result in increased delay due to longer wordlines and bitlines. A large subarray count would give better delay characteristics but result in increased silicon area consumption.
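To make the roles of Ndwl, Ndbl, and Nspd concrete, the short sketch below computes the subarray grid and per-subarray dimensions for a hypothetical bank, following the usual CACTI-style formulation; the function name, equations, and example parameters are illustrative assumptions, not values taken from this paper.

```python
def subarray_geometry(capacity_bytes, block_bytes, assoc, ndwl, ndbl, nspd):
    """Cells per subarray and the subarray grid for one bank (CACTI-style sketch).

    capacity_bytes: bank capacity, block_bytes: cache block size,
    assoc: associativity, ndwl/ndbl: wordline/bitline partitions,
    nspd: sets stored per physical row of a subarray.
    """
    # Total sets = capacity/(block*assoc); they are spread over ndbl rows of
    # subarrays, with nspd sets stored in each physical row of a subarray.
    rows_per_subarray = capacity_bytes // (block_bytes * assoc * ndbl * nspd)
    # One physical row stores nspd sets (assoc blocks of 8*block_bytes bits each),
    # and each wordline is split across ndwl columns of subarrays.
    cols_per_subarray = (8 * block_bytes * assoc * nspd) // ndwl
    grid = (ndbl, ndwl)  # (rows, columns) of subarrays in the bank
    aspect_ratio = cols_per_subarray / rows_per_subarray
    return rows_per_subarray, cols_per_subarray, grid, aspect_ratio


# Hypothetical 4 MB, 8-way bank with 64 B blocks, split into an 8x8 subarray grid.
rows, cols, grid, ar = subarray_geometry(4 * 1024 * 1024, 64, 8,
                                         ndwl=8, ndbl=8, nspd=1)
print(f"{grid[0]}x{grid[1]} subarrays, each {rows} rows x {cols} columns "
      f"(aspect ratio {ar:.2f})")
```

Raising Ndwl or Ndbl shortens each wordline or bitline segment, cutting the quadratic wire-delay term discussed earlier, but it also multiplies the number of subarrays and the peripheral and H-tree overhead needed to stitch them together, which is exactly the area/delay/power trade-off described above.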