Grids to Petaflops
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Average Memory Access Time: Reducing Misses
Review: Cache Performance. 332 Advanced Computer Architecture, Chapter 2: Caches and Memory Systems, January 2007, Paul H J Kelly. These lecture notes are partly based on the course text, Hennessy and Patterson's Computer Architecture: A Quantitative Approach (3rd and 4th eds), and on the lecture slides of David Patterson and John Kubiatowicz's Berkeley course.

Miss-oriented approach to memory access:
CPUtime = IC × (CPI_Execution + (MemAccess/Inst) × MissRate × MissPenalty) × CycleTime
CPUtime = IC × (CPI_Execution + (MemMisses/Inst) × MissPenalty) × CycleTime
where CPI_Execution includes both ALU and memory instructions.

Separating out the memory component entirely, with AMAT = Average Memory Access Time and CPI_AluOps excluding memory instructions:
CPUtime = IC × ((AluOps/Inst) × CPI_AluOps + (MemAccess/Inst) × AMAT) × CycleTime
AMAT = HitTime + MissRate × MissPenalty
     = (HitTime_Inst + MissRate_Inst × MissPenalty_Inst) + (HitTime_Data + MissRate_Data × MissPenalty_Data)

Reducing misses. There are three ways to improve cache performance: 1. reduce the miss rate, 2. reduce the miss penalty, 3. reduce the hit time.

Classifying misses: the 3 Cs.
Compulsory: the first access to a block cannot be in the cache, so the block must be brought into the cache. Also called cold-start misses or first-reference misses. (These misses occur even in an infinite cache.)
Capacity: if the cache cannot contain all the blocks needed during execution of a program, capacity misses will occur due to blocks being discarded and later retrieved. (Misses that occur even in a fully associative cache of size X.)
Conflict: if the block-placement strategy is set-associative or direct-mapped, conflict misses (in addition to compulsory and capacity misses) will occur because a block may be discarded and later retrieved when too many blocks map to the same set.
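As a quick illustration of these formulas (an added sketch, not part of the lecture notes), the following Python snippet evaluates AMAT and the separated-out CPU-time equation; all numeric inputs (hit time, miss rate, miss penalty, instruction mix) are assumed example values.

```python
# Minimal sketch (added; not from the lecture notes) of the AMAT and
# CPU-time formulas above. All numeric inputs are assumed example values.

def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time, in cycles."""
    return hit_time + miss_rate * miss_penalty

def cpu_time(instr_count, cpi_alu_ops, alu_ops_per_inst,
             mem_access_per_inst, avg_mem_access_time, cycle_time):
    """CPU time with the memory component separated out (see above)."""
    cpi_total = (alu_ops_per_inst * cpi_alu_ops
                 + mem_access_per_inst * avg_mem_access_time)
    return instr_count * cpi_total * cycle_time

# Assumed example: 1-cycle hit, 5% miss rate, 40-cycle miss penalty.
a = amat(hit_time=1.0, miss_rate=0.05, miss_penalty=40.0)
print(f"AMAT = {a:.1f} cycles")  # 3.0 cycles

# Assumed example: 10^9 instructions, CPI_AluOps = 1, 1 ALU op and
# 1.3 memory accesses per instruction, 1 ns cycle time.
t = cpu_time(1e9, 1.0, 1.0, 1.3, a, 1e-9)
print(f"CPU time = {t:.2f} s")   # 4.90 s
```

With a 1-cycle hit, a 5% miss rate, and a 40-cycle penalty, AMAT comes to 3 cycles, and the memory term dominates the resulting CPU time.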
TeraGrid Evaluation Report: Project Findings, August 2008
Report from the TeraGrid Evaluation Study, Part 1: Project Findings. Ann Zimmerman and Thomas A. Finholt, August 2008. Collaboratory for Research on Electronic Work, School of Information, University of Michigan, 1075 Beal Avenue, Ann Arbor, MI 48109-2112. http://www.crew.umich.edu

Table of Contents: Executive Summary; 1. Introduction; 2. Research Questions; 3. Research Methods; 3.1 Data Collection; 3.1.1 User Workshop; 3.1.2 Site Visits; 3.1.3 Interviews (TeraGrid Users; TeraGrid and Centers Personnel; Non-TeraGrid …)
Mips 16 Bit Instruction Set
MIPS instruction set architecture. Designer: MIPS Technologies, Imagination Technologies. Bits: 64-bit (32 → 64). Introduced: 1985. Version: MIPS32/64 Release 6 (2014). Design: RISC. Type: register-register. Encoding: fixed. Branching: compare and branch. Endianness: bi-endian. Page size: 4 KB. Extensions: MDMX, MIPS-3D. Open: partly (the R12000 has been on the market for more than 20 years and therefore cannot be subject to patent claims, so the R12000 and older processors are completely open). Registers: 32 general-purpose, 32 floating-point.

MIPS (Microprocessor without Interlocked Pipelined Stages) is a reduced instruction set computer (RISC) instruction set architecture (ISA) developed by the American company MIPS Computer Systems. There are several versions of MIPS, including MIPS I, II, III, IV, and V, and five releases of MIPS32/64 (for 32- and 64-bit implementations, respectively). The early MIPS architectures were 32-bit only; 64-bit versions were developed later. As of April 2017, the current version is MIPS32/64 Release 6. MIPS32/64 differs from MIPS I-V primarily by defining the privileged kernel-mode System Control Coprocessor in addition to the user-mode architecture. The MIPS architecture has several optional extensions: MIPS-3D, a simple set of floating-point SIMD instructions dedicated to common 3D tasks; MDMX (MaDMaX), a more extensive SIMD instruction set using the 64-bit floating-point registers; MIPS16e, which adds compression to the instruction stream so that programs take up less space; and MIPS MT, which adds multithreading capability. Computer architecture courses in universities and technical schools often study the MIPS architecture. The architecture has had a major influence on later RISC architectures such as Alpha.
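To make the fixed-length, register-register encoding concrete, here is a small illustrative Python sketch (ours, not from the article) that packs a classic 32-bit MIPS I R-type instruction; MIPS16e, mentioned above, re-encodes a subset of such instructions into 16-bit forms to save space.

```python
# Minimal sketch (added; not from the article): packing a classic 32-bit
# MIPS I R-type instruction word.
# Field layout: opcode(6) | rs(5) | rt(5) | rd(5) | shamt(5) | funct(6).
def encode_rtype(rs, rt, rd, shamt, funct, opcode=0):
    return ((opcode & 0x3F) << 26 | (rs & 0x1F) << 21 | (rt & 0x1F) << 16 |
            (rd & 0x1F) << 11 | (shamt & 0x1F) << 6 | (funct & 0x3F))

# add $t2, $t0, $t1  ->  rd = $t2 (reg 10), rs = $t0 (reg 8), rt = $t1 (reg 9),
# and funct = 0x20 selects "add" among R-type operations.
word = encode_rtype(rs=8, rt=9, rd=10, shamt=0, funct=0x20)
print(f"0x{word:08X}")  # 0x01095020
```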
The Neutron Science TeraGrid Gateway, a TeraGrid Science Gateway to Support the Spallation Neutron Source
The Neutron Science TeraGrid Gateway, a TeraGrid Science Gateway to Support the Spallation Neutron Source. John W. Cobb*, Al Geist*, James A. Kohl*, Stephen D. Miller‡, Peter F. Peterson‡, Gregory G. Pike*, Michael A. Reuter‡, Tom Swain†, Sudharshan S. Vazhkudai*, Nithya N. Vijayakumar

SUMMARY. The National Science Foundation's (NSF's) Extensible Terascale Facility (ETF), or TeraGrid [1], is entering its operational phase. One ETF science gateway effort is the Neutron Science TeraGrid Gateway (NSTG). The Oak Ridge National Laboratory (ORNL) resource provider effort (ORNL-RP), during construction and now in operations, is bridging a large-scale experimental community and the TeraGrid as a large-scale national cyberinfrastructure. Of particular emphasis is collaboration with the Spallation Neutron Source (SNS) at ORNL. The U.S. Department of Energy's (USDOE's) SNS [2] at ORNL will be commissioned in spring of 2006 as the world's brightest source of neutrons. Neutron science users can run experiments; generate datasets; perform data reduction, analysis, and visualization of results; collaborate with remote users; and archive long-term data in repositories with curation services. The ORNL-RP and the SNS data analysis group have spent 18 months developing and exploring user requirements, including the creation of prototypical services such as facility portal, data, and application execution services. We describe results from these efforts and discuss implications for science gateway creation. Finally, we show how these results are incorporated into implementation planning for the NSTG and SNS architectures. The plan is for primarily portal-based user interaction supported by a service-oriented architecture for functional implementation.

KEY WORDS: Portal, Neutron Scattering, TeraGrid, Science Gateway, Service Architecture, Grid
A Programming Model and Processor Architecture for Heterogeneous Multicore Computers
A PROGRAMMING MODEL AND PROCESSOR ARCHITECTURE FOR HETEROGENEOUS MULTICORE COMPUTERS. A dissertation submitted to the Department of Electrical Engineering and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Michael D. Linderman, February 2009. © Copyright by Michael D. Linderman 2009. All Rights Reserved.

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy. (Professor Teresa H. Meng) Principal Adviser

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy. (Professor Mark Horowitz)

I certify that I have read this dissertation and that, in my opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy. (Professor Krishna V. Shenoy)

Approved for the University Committee on Graduate Studies.

Abstract. Heterogeneous multicore computers, those systems that integrate specialized accelerators into and alongside multicore general-purpose processors (GPPs), provide the scalable performance needed by computationally demanding information processing (informatics) applications. However, these systems often feature instruction sets and functionality that significantly differ from GPPs and for which there is often little or no sophisticated compiler support. Consequently, developing applications for these systems is difficult and developer productivity is low. This thesis presents Merge, a general-purpose programming model for heterogeneous multicore systems. The Merge programming model enables the programmer to leverage different processor-specific or application domain-specific toolchains to create software modules specialized for different hardware configurations; and provides language mechanisms to enable the automatic mapping of processor-agnostic applications to these processor-specific modules.
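The abstract's central mechanism, automatically mapping a processor-agnostic call onto whichever processor-specific module suits the available hardware, can be illustrated with a small generic sketch. The following Python code is an assumption-laden illustration of that selection idea only; it is not Merge's actual syntax or API, and all names in it are invented for the example.

```python
# Generic sketch (not Merge's API): one logical function, several
# hardware-specialized variants, with the first applicable variant chosen.

def has_gpu():
    # Assumed stub; a real system would probe the hardware here.
    return False

_variants = []  # (availability predicate, implementation) pairs

def variant(available):
    """Register an implementation guarded by an availability predicate."""
    def register(fn):
        _variants.append((available, fn))
        return fn
    return register

@variant(available=has_gpu)
def dot_gpu(a, b):
    raise NotImplementedError("would invoke a GPU-specific module here")

@variant(available=lambda: True)      # generic CPU fallback, always usable
def dot_cpu(a, b):
    return sum(x * y for x, y in zip(a, b))

def dot(a, b):
    """Processor-agnostic entry point: pick the first applicable variant."""
    for available, fn in _variants:
        if available():
            return fn(a, b)
    raise RuntimeError("no applicable variant")

print(dot([1, 2, 3], [4, 5, 6]))  # 32, via the CPU fallback on this machine
```

In Merge itself the specialized modules come from different processor- or domain-specific toolchains and the mapping is handled by language mechanisms; the sketch only mimics the variant-selection behavior.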
David A. Lifka, 8-2019
DAVID A. LIFKA
234 Day Hall, Cornell University, Ithaca, NY 14853-3801. 607-254-8621; cell 607-227-5517; [email protected]

Executive Profile. Cornell University Vice President and Chief Information Officer. IT leader and business strategist with strong leadership, team building, and problem-solving skills. Develops, executes, and enhances a sustainable, cutting-edge IT organization that drives down costs, operates efficiently, and transforms education, research, and public service by opening new avenues for collaboration and innovation. Expert in developing and deploying optimal IT solutions defined by organizational priorities, strategic value, customer needs, and budget constraints. Over thirty years' experience in administrative and research computing as a center director and lead technologist at an Ivy League university and a national research laboratory. Invited industry speaker on emerging technologies and best business practices. Active member of the University Board of Trustees Cyber Security subcommittee of the Audit Committee, campus IT committees, advisory boards, and other leadership positions.

Skills. Analyzing organizational challenges and identifying opportunities; building relationships with key stakeholders in order to identify and define high-value needs; managing projects; motivating employees; defining and meeting fiscal goals; re-engineering IT functions; dealing with performance issues and celebrating successes; building mutually beneficial vendor relations; implementing commodity and emerging technologies to meet research and administrative computing needs in a timely and transparent manner. Technical skills include: solution architecture and implementation, interconnects, data management and analysis, cloud and grid computing, high performance computing, compilers, parallel and object-oriented programming languages, resource management, and high-speed networking.
Grid Computing and the TeraGrid GI Science Gateway
Grid Computing and the TeraGrid GI Science Gateway. Kate Cowles. 22S:295 High Performance Computing Seminar, Nov. 29, 2007.

Outline: 1. Example data; 2. Grid computing; 3. The TeraGrid; 4. TeraGrid Science Gateways; 5. GISolve.

Example data: sulfur dioxide concentration data from the EPA Air Quality System database; n = 4711 observations from 870 sites in the years 1998–2005.

Grid computing: several different definitions, all involving distributed computing; networking together computing clusters at different geographic locations to harness computing and storage resources.

The TeraGrid:
• the world's largest, most comprehensive distributed cyberinfrastructure for open scientific research
• began in 2001 when NSF awarded $45 million to establish a Distributed Terascale Facility (DTF) to NCSA, SDSC, Argonne National Laboratory, and the Center for Advanced Computing Research (CACR) at the California Institute of Technology
• coordinated through the Grid Infrastructure Group (GIG) at the University of Chicago
• more than 250 teraflops of computing capability (May 2007); teraflop: a trillion (10^12) floating-point operations per second
• more than 30 petabytes of online and archival data storage (May 2007); petabyte: a quadrillion (10^15) bytes
• rapid access and retrieval over high-performance networks.
Production Cluster Visualization: Experiences and Challenges
Workshop on Parallel Visualization and Graphics. Production Cluster Visualization: Experiences and Challenges. Randall Frank, VIEWS Visualization Project Lead, Lawrence Livermore National Laboratory. UCRL-PRES-200237.

Why COTS distributed visualization clusters? The realities of extreme dataset sizes (10 TB+): stored with the compute platform; cannot afford to copy the data; co-resident visualization. Track compute platform trends: distributed infrastructure; commodity hardware trends; cost-effective solutions. Migration of graphics leadership: the PC (gamers), desktops; display technologies; HDR, resolution, stereo, tiled, etc.

Production visualization requirements:
• Unique, aggressive I/O requirements: access patterns/performance
• Generation of graphical primitives: graphics computation (primitive extraction/computation); dataset decomposition (e.g. slabs vs chunks; see the sketch after this excerpt)
• Rendering of primitives: aggregation of multiple rendering engines
• Video displays: routing of digital or video tiles to displays (over distance)
• Interactivity (not a render-farm!): real-time imagery; interaction devices, human in the loop (latency, prediction issues)
• Scheduling: systems and people

Visualization environment architecture (diagram): archive, PowerWall, analog video switch, simulation data, GigE switch, visualization engine, data manipulation engine, offices. Raw data on platform disks/archive systems; data manipulation …
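To make the "slabs vs chunks" dataset decomposition concrete, here is a minimal illustrative sketch (ours, not code from the talk) that splits a 3D volume across workers either as slabs along one axis or as 3D chunks; NumPy is assumed purely for brevity.

```python
# Minimal sketch: two ways to decompose a 3D volume across workers.
# Illustrative only; not code from the talk.
import numpy as np

volume = np.zeros((256, 256, 256), dtype=np.float32)  # placeholder dataset

def slab_decompose(vol, n_workers):
    """Split along the slowest-varying axis only (contiguous 'slabs')."""
    return np.array_split(vol, n_workers, axis=0)

def chunk_decompose(vol, grid=(2, 2, 2)):
    """Split along all three axes into a grid of 3D 'chunks'."""
    chunks = []
    for xs in np.array_split(vol, grid[0], axis=0):
        for ys in np.array_split(xs, grid[1], axis=1):
            chunks.extend(np.array_split(ys, grid[2], axis=2))
    return chunks

slabs = slab_decompose(volume, 8)    # 8 slabs of shape (32, 256, 256)
chunks = chunk_decompose(volume)     # 8 chunks of shape (128, 128, 128)
print(slabs[0].shape, chunks[0].shape)
```

Slabs keep each piece contiguous and simple to read from disk, while chunks trade that simplicity for a better surface-to-volume ratio when pieces must be processed and composited in parallel.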
Introduction to PlayStation®2 Architecture
Introduction to PlayStation®2 Architecture. James Russell, Software Engineer, SCEE Technology Group.

In this presentation: company overview; PlayStation 2 architecture overview; PS2 game development; differences between PS2 and PC.

1) Sony Computer Entertainment overview. SCE Europe (includes Aus, NZ, Mid East, Southern Africa), SCE America, SCE Japan.
Sales: 40 million sold world-wide since launch; since March 2000 in Japan; since Nov 2000 in Europe/US; new markets: Middle East, India, Korea, China; long-term aim: 100 million within 5 years of launch; production facilities can produce 2M/month.
Design considerations: over 5 years, we'll make 100,000,000 PS2s, so design is very important; must be inexpensive (or should become that way); technology must be ahead of the curve; need high performance, low price.
How to achieve this? Processor yield: high CPU clock speed means lower yields. Solution? Low CPU clock speed, but high parallelism; nothing readily available, so SCE designs custom chips.

2) Technical aspects of PlayStation 2.
• 128-bit CPU core "Emotion Engine" + 2 independent Vector Units + Image Processing Unit (for MPEG)
• GS - "Graphics Synthesizer" GPU
• SPU2 - Sound Processing Unit
• I/O Processor (CD/DVD, USB, i.Link)

"Emotion Engine" specifications: CPU core: 128-bit; system clock: 300 MHz; bus bandwidth: 3.2 GB/sec; main memory: 32 MB (Direct Rambus); floating-point calculation: 6.2 GFLOPS; 3D geometry performance: 66 million polygons/sec.
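A rough way to see the "low clock speed, high parallelism" point from the figures above: 6.2 GFLOPS at a 300 MHz system clock works out to roughly 20 floating-point operations per clock cycle (6.2×10^9 / 3×10^8 ≈ 20.7), which the Emotion Engine reaches by spreading work across the CPU core and the two vector units. The per-cycle figure is our own arithmetic from the quoted specifications, not a number stated in the slides.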
Placing Landmarks on the Genome
What is XSEDE? The Extreme Science and Engineering Discovery Environment (XSEDE) is the most advanced, powerful, and robust collection of integrated digital resources and services in the world. It is a single virtual system that scientists can use to interactively share computing resources, data, and expertise.

The five-year, $121 million project is supported by the National Science Foundation. It replaces and expands on the National Science Foundation TeraGrid project. More than 10,000 scientists used the TeraGrid to complete thousands of research projects, at no cost to the scientists. XSEDE continues that same sort of work—with an expanded scope, generating more knowledge, and improving our world in an even broader range of fields.

XSEDE is led by the University of Illinois's National Center for Supercomputing Applications. The partnership includes:
• Carnegie Mellon University/Pittsburgh Supercomputing Center - University of Pittsburgh
• Center for Advanced Computing - Cornell University
• Indiana University
• Jülich Supercomputing Centre
• National Center for Atmospheric Research
• Ohio Supercomputer Center - The Ohio State University
• Purdue University
• Rice University
• Shodor Education Foundation
• Southeastern Universities Research Association
• University of California Berkeley
• San Diego Supercomputer Center - University of California San Diego
• University of Chicago
• National Center for Supercomputing Applications - University of Illinois at Urbana-Champaign
• National Institute for Computational Sciences - University …
Sony's Emotionally Charged Chip
Microprocessor Report, Volume 13, Number 5, April 19, 1999. The Insiders' Guide to Microprocessor Hardware.
Killer Floating-Point "Emotion Engine" To Power PlayStation 2000. By Keith Diefendorff.

While Intel and the PC industry stumble around in search of some need for the processing power they already have, Sony has been busy trying to figure out how to get more of it—lots more. The company has apparently succeeded: at the recent International Solid-State Circuits Conference (see MPR 4/19/99, p. 20), Sony Computer Entertainment (SCE) and Toshiba described a multimedia processor that will be the heart of the next-generation PlayStation, which—lacking an official name—we refer to as PlayStation 2000, or PSX2. Called the Emotion Engine (EE), the new chip upsets the traditional notion of a game processor.

… rate of two million units per month, making it the most successful single product (in units) Sony has ever built. While SCE has cornered more than 60% of the $6 billion game-console market, it was beginning to feel the heat from Sega's Dreamcast (see MPR 6/1/98, p. 8), which has sold over a million units since its debut last November. With a 200-MHz Hitachi SH-4 and NEC's PowerVR graphics chip, Dreamcast delivers 3 to 10 times as many 3D polygons as the PlayStation's 34-MHz MIPS processor (see MPR 7/11/94, p. 9). To maintain king-of-the-mountain status, SCE had to do something spectacular. And it has: the PSX2 will deliver …
Open CyberGIS Software for Geospatial Research and Education in the Big Data Era
Available online at www.sciencedirect.com. ScienceDirect, SoftwareX 5 (2016) 1–5. www.elsevier.com/locate/softx

Open cyberGIS software for geospatial research and education in the big data era
Shaowen Wang (a,b,c,d,e,f,*), Yan Liu (a,b,c,f), Anand Padmanabhan (a,b,c,f)
a CyberGIS Center for Advanced Digital and Spatial Studies, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
b CyberInfrastructure and Geospatial Information Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
c Department of Geography and Geographic Information Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
d Department of Urban and Regional Planning, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
e Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
f National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
Received 16 March 2015; received in revised form 3 July 2015; accepted 28 October 2015

Abstract. CyberGIS represents an interdisciplinary field combining advanced cyberinfrastructure, geographic information science and systems (GIS), spatial analysis and modeling, and a number of geospatial domains to improve research productivity and enable scientific breakthroughs. It has emerged as new-generation GIS that enables unprecedented advances in data-driven knowledge discovery, visualization and visual analytics, and collaborative problem solving and decision-making. This paper describes three open software strategies – open access, source, and integration – to serve various research and education purposes of diverse geospatial communities.