Embedded Support Partner User Guide


Embedded Support Partner User Guide

Document Number 007-4065-009

CONTRIBUTORS
Revised by Darrin Goss
Production by Karen Jacobson
Engineering contributions by the System and Site Support Tools Group

COPYRIGHT
© 1999, 2000, 2001, 2002, 2003 Silicon Graphics, Inc. — All Rights Reserved
This document contains proprietary and confidential information of SGI. The contents of this document may not be disclosed to third parties, copied, or duplicated in any form, in whole or in part, without the prior written permission of SGI.

LIMITED RIGHTS LEGEND
Use, duplication, or disclosure of the technical data contained in this document by the Government is subject to restrictions as set forth in subdivision (c) (1) (ii) of the Rights in Technical Data and Computer Software clause at DFARS 52.227-7013 and/or in similar or successor clauses in the FAR, or in the DOD or NASA FAR Supplement. Unpublished rights reserved under the Copyright Laws of the United States. Contractor/manufacturer is SGI, 1500 Crittenden Lane, Mountain View, CA 94043.

TRADEMARKS AND ATTRIBUTIONS
Silicon Graphics, SGI, Challenge, InPerson, IRIX, O2, Octane, Onyx, Onyx2, Origin, and the SGI logo are registered trademarks, and Altix, CASEVision, Key-O-Matic, Performance Co-Pilot, and Supportfolio are trademarks of Silicon Graphics, Inc., in the United States and/or other countries worldwide. CrayLink is a trademark of Cray, Inc. Linux is a registered trademark of Linus Torvalds, used with permission by Silicon Graphics, Inc. MIPS is a trademark of MIPS Technologies, Inc., used under license by Silicon Graphics, Inc. Netscape is a trademark of Netscape Communications Corporation. UNIX is a registered trademark and X Window System is a trademark of The Open Group. U.S. Robotics and Sportster are trademarks of 3Com Corporation. All other trademarks are the property of their respective owners.
Contents

List of Figures xi
List of Tables xix
What's New in this Document xxi

1. Introduction 1
    Distribution 3
        Base Package 3
        Extended Package 4
    Named Groups 6
    Full and Light Nodes 7
    TCP/IP Protocol 9
    Group Management Over Hierarchies 9
    Simplified Group Management Configuration 11
    Enhanced Configuration for SGM Clients 11
    Central Logbook Capability 11
    ESP Benefits 12
    ESP Architecture 14
    Core Software 19
        System Support Database (SSDB) 19
        ESP and SGM DSOs 19
    Monitoring Software 21
        Configuration Monitoring 21
        Event Monitoring 22
        Availability Monitoring 25
    Notification Software 26
    Console Software 28
        Web-based Interface 28
        Command Line Interface 29
    External Tools 30
        Performance Monitoring Tools 30
        Diagnostic Tools 31
        RAID Monitoring Tools 31
    Remote Support Capability 31
    Security Features 32
    System Performance Impact of ESP 33

2. Accessing ESP 35
    Using the Command Line Interface 35
    Using the Web-based Interface 42
        Opening a URL in a Web Browser 44
        Using the Embedded_Support_Partner Icon (ESP for the IRIX OS Only) 49
        Using the launchESPartner Command (ESP for the IRIX OS Only) 55
    Configuring Single System Management 59
    Configuring Group Management 60

3. Administering ESP 63
    Setting Up the Customer Profile 64
        Using the Web-based Interface 64
        Using the Command Line Interface 67
    Setting Up the Network Permissions 68
        Using the Web-based Interface 68
        Using the Command Line Interface 70
    Setting Up the User Permissions 71
        Viewing the Current Users 71
            Using the Web-based Interface 71
            Using the Command Line Interface 72
        Adding a User 73
            Using the Web-based Interface 73
            Using the Command Line Interface 76
        Updating a Password 77
            Using the Web-based Interface 77
            Using the Command Line Interface 79
        Updating Permissions for a User 80
            Using the Web-based Interface 80
            Using the Command Line Interface 83
        Deleting a User 85
            Using the Web-based Interface 85
            Using the Command Line Interface 86
    Manipulating Database Archives 87
        Using the Web-based Interface 87
        Using the Command Line Interface 89

4. Setting Up the ESP Environment 91
    Setting Up the System Serial Number (ESP for the Linux OS Only) 92
        Setting the System Serial Number (Single System Manager Mode) 93
        Setting the System Serial Number (System Group Manager Mode) 95
    Setting Up the Global Configuration Parameters 97
        Using the Web-based Interface 97
        Using the Command Line Interface 102
    Setting Up the Paging Parameters (ESP for IRIX OS Only) 105
        Setting Up the Modem Parameters (ESP for IRIX OS Only) 107
            Using the Web-based Interface 107
            Using the Command Line Interface 109
        Setting Up the Paging Service Provider Parameters (ESP for IRIX OS Only) 110
            Using the Web-based Interface 110
            Using the Command Line Interface 112
        Setting Up the Paging Parameters (ESP for the IRIX OS Only) 112
            Using the Web-based Interface 112
            Using the Command Line Interface 113
    Setting Up the System Parameters (Single System Manager Mode Only) 114
    Setting Up the System/Client Parameters (System Group Manager Mode Only) 116
        Adding a New SGM Client 116
        Updating the System or a Client 122
            Updating the SGM Server 123
            Updating an ESP 3.0 SGM Client 125
            Updating an ESP 2.0 SGM Client 129
        Unsubscribing SGM Clients 131
    Setting Up the Authentication Password 133
        Adding a Password for a New Server 133
        Updating the Password for an Existing Server 134
    Using the Command Line Interface to Configure SGM Settings 135
    Importing and Exporting ESP Environments 137

5. Configuring ESP 139
    Configuring Events 139
        Managing Event Profiles 140
            Using the Web-based Interface 140
            Using the Command Line Interface 143
        Viewing Event Classes and Events 145
        Adding Events 146
            Using the Web-based Interface 146
            Using the Command Line Interface 163
        Updating Events 164
            Using the Web-based Interface 164
            Using the Command Line Interface 171
        Updating Multiple Events at the Same Time (Batch Updating) 173
            Using the Web-based Interface 173
            Using the Command Line Interface 177
        Deleting Events 178
            Using the Web-based Interface 178
            Using the Command Line Interface 180
        Subscribing Events from SGM Clients 182
            Using the Web-based Interface 182
            Using the Command Line Interface 186
    Configuring Actions 187
        Viewing the Existing Actions 187
        Adding Actions 188
            Using the Web-based Interface 188
            Using the Command Line Interface 200
        Updating Actions 201
            Using the Web-based Interface 201
            Using the Command Line Interface 205
        Disabling and Enabling Actions 206
            Using the Web-based Interface 206
            Using the Command Line Interface 207
    Configuring Performance Monitoring 208
        Using the Web-based Interface 208
        Using the Command Line Interface 215
    Configuring System Monitoring 216
        Using the Web-based Interface (Single System Manager Mode) 216
        Using the Web-based Interface (System Group Manager Mode) 220
        Using the Command Line Interface 222

6. Viewing Reports 225
    About Reports 225
    Events Registered Reports 229
        Using the Web-based Interface (Single System Manager Mode) 229
        Using the Web-based Interface (System Group Manager Mode) 235
        Using the Command Line Interface 241
    Actions Taken Reports 242
        Using the Web-based Interface (Single System Manager Mode) 242
        Using the Web-based Interface (System Group Manager Mode) 244
        Using the Command Line Interface 246
    Availability Reports 247
        Using the Web-based Interface (Single System Manager Mode) 247
        Using the Web-based Interface (System Group Manager Mode) 250
        Using the Command Line Interface 253
    Diagnostic Result Reports 254
        Using the Web-based Interface (Single System Manager Mode) 254
        Using the Web-based Interface (System Group Manager Mode) 256
        Using the Command Line Interface 258
    Hardware Reports 259
        Hardware Inventory Reports 259
            Using the Web-based Interface (Single System Manager Mode) 259
            Using the Web-based Interface (System Group Manager Mode) 262
            Using the Command Line Interface 265
        Hardware Changes Reports 266
            Using the Web-based Interface (Single System Manager Mode) 266
            Using the Web-based Interface (System Group Manager Mode) 268
            Using the Command Line Interface 270
    Software Reports 271
        Software Inventory Reports 271
            Using the Web-based Interface (Single System Manager Mode) 271
            Using the Web-based Interface (System Group Manager Mode) 275
            Using the Command Line Interface 277
        Software Changes Reports 278
            Using the Web-based Interface (Single System Manager Mode) 278
            Using the Web-based Interface (System Group Manager Mode) 280
            Using the Command Line Interface 281
    System Reports 282
        System Inventory Reports 282
            Using the Web-based Interface 282
            Using the Command Line Interface 285
        System Changes Reports 286
            Using the Web-based Interface (Single System Manager Mode) 286
            Using the Web-based Interface (System Group Manager Mode) 288
            Using the Command Line Interface 289
    Site Reports (System Group Manager Mode Only) 290
        Using the Command Line Interface 292

7. Using the ESP Logbook 293
    About the ESP Logbook 293
    Viewing Logbook Entries 293
        Using the Web-based Interface (Single System Manager Mode) 293
        Using the Web-based Interface (System Group Manager Mode) 295
        Using the Command Line Interface 297
    Adding a Logbook Entry 298
        Using the Web-based Interface (Single System Manager Mode) 298
        Using the Web-based Interface (System Group Manager Mode) 300
        Using the Command Line Interface 303

8. Sending Notifications 305
    About the espnotify Tool 305
        Command Line Options for Displaying a Message on the Console 305
        Displaying a Message on an X Window System Display 306
        Sending an E-mail Message 308
    Invoking espnotify from ESP 309
        Example: Creating an Action to Send an E-mail 309

9. Logging Events from Applications and Scripts 313
    Event Classification and Sequence Numbers 313
    Using the Event Manager API 314
    Using the emgrlogger and esplogger Tools 314
        Example 1 316
        Example 2 316

10. Default Event Classes and Types 317
    ESP for the Linux OS 317
        Default Event Classes 317
        Default Event Types 318
    ESP for the IRIX OS 321
        Default Event Classes 321
        Default Event Types 323

11.