High Performance (Scientific) Computing: from Your Desktop PC to Parallel Supercomputers. A User-Friendly Introduction

Politecnico di Torino PhD School

Michele Griffa
PhD student, XIX cycle
Dept. of Physics, Politecnico di Torino, corso Duca degli Abruzzi 24, 10129, Torino (Italy)
and Bioinformatics and High Performance Computing Laboratory, Bioindustry Park of Canavese, via Ribes 5, 10010, Colleretto Giacosa (Italy)
E-mail: [email protected]
Personal Web site: http://www.calcolodistr.altervista.org
16/10/06

Table of contents

1 Introduction
2 High Performance Computing (HPC): what does it mean and why do we need it?
3 Computer architectures
  3.1 The Von Neumann architecture and its bottlenecks
  3.2 CPU built-in parallelism: technological improvements
    3.2.1 Bit-parallel memory and arithmetic
    3.2.2 Implementation of I/O-dedicated processors
    3.2.3 Memory interleaving
    3.2.4 Hierarchies of memories
    3.2.5 Multitasking
    3.2.6 Instruction look-ahead
    3.2.7 Multiple functional units
    3.2.8 Pipelining
    3.2.9 Floating point arithmetic: SSE technology
    3.2.10 CPU design "philosophies": past and future scenarios
      3.2.10.1 CISC and RISC architectures
      3.2.10.2 VLIW (Very Long Instruction Word) CPUs: 64-bit architectures
      3.2.10.3 SMP (Symmetric Multiprocessor) and multi-core computers
  3.3 Parallel (multiprocessor) computers
    3.3.1 SIMD systems
    3.3.2 MIMD systems
      3.3.2.1 Shared-memory architectures
      3.3.2.2 Distributed-memory architectures
    3.3.3 Interconnection issues
  3.4 Grid Computing: the future of HPC?
4 1st level of HPC: optimization of scalar codes (squeezing the hardware of your desktop PC!)
  4.1 Profiling your programs
    4.1.1 Examples of software for performance analysis
  4.2 Source-code level optimization
  4.3 Assembly-code level optimization by compilers
  4.4 The Intel Fortran-C/C++ Linux compiler for x86/x86_64 architectures as an example of an optimizing compiler for Intel architectures
5 2nd level of HPC: parallel programming for parallel computers
  5.1 Parallel programming paradigms
    5.1.1 Message passing paradigm
      5.1.1.1 MPI libraries
    5.1.2 Shared-memory systems paradigms
      5.1.2.1 OpenMP
6 Concluding remarks and acknowledgements
7 Bibliography
8 List of useful Web sites

Group picture of the teachers (red numbers) and participants of the 13th Cineca Summer School on Parallel Computing. Instructors: (1) Sigismondo Boschi, (2) Carlo Cavazzoni, (3) Cristiano Calonaci, (4) Claudio Gheller, (5) Gerardo Ballabio. Giovanni Erbacci and Chiara Marchetto (the other teachers) are not in this photo. Green arrow (and green t-shirt): me. Many thanks to C. Calonaci and the Cineca Web administrators for having recovered this photograph.

Group picture of the teachers (red numbers) and participants of the 2nd Caspur Summer School on Advanced HPC. Instructors: (1) Federico Massaioli, (2) Giorgio Amati, (3) Gabriele Mambrini, (4) Roberto Verzicco, (5) Jack Dongarra, (6) Elena ?????. The other teachers of the School are not in this photo. Green arrow (but white t-shirt, this time): me.

1. Introduction

This document is a brief and incomplete report on the art of High Performance (Scientific) Computing (HPC), targeted at scientists (certainly not at computer scientists or computational scientists).
Its main objective is to give the interested non-specialist reader an overview of how to improve the performance of the computing codes he/she uses in daily scientific work. Performance does not mean only speed of execution: high performance is concerned with the trade-off among speed, machine workload, and volatile/persistent memory occupation. This report should show that high performance hardware is neither necessary nor sufficient for improving the performance of one's scientific computing code: algorithm design, the methods and approaches used to implement the code in a programming language, and the customization of the code to the hardware features are equally important issues.

Section 2 presents a brief introduction to the world of HPC, with an example of a real-world computing problem whose solution requires computing resources that are still not available today. This is an example of a grand-challenge problem, and it is useful in showing the importance of HPC in everyday human activities.

Section 3 is entirely dedicated to computer architectures, with a logical flow that starts from the basic ingredients of the Von Neumann architecture (the basic logical model of a sequential electronic computer) and arrives at the main architectures of today's parallel supercomputers. The intermediate path covers, in chronological and logical order, the most important steps that have led from single desktop PCs to parallel architectures, with particular attention to the hardware solutions that have tried to introduce parallelism within the operation of a single CPU. Section 3.1 is dedicated to the Von Neumann architecture of a serial computer with a single processing unit (CPU) and to its bottlenecks. Section 3.2 is a tentative and incomplete presentation of the evolution of CPU architectures and of the technologies that have improved their performance, with a stress on the solutions that implement parallelism within a single processing unit. The choice to concentrate on this issue contrasts with the usual treatment in non-specialized mass media, which tends to emphasize the increase of CPU speed in terms of clock cycles per unit of time (clock frequency). As Section 3.2 shows, the impressive improvements in CPU performance are based not only on the increase in clock speed but also on other technical solutions that exploit parallelism. The famous Moore's law, which has seemed to govern the development of microelectronics over the last three decades and to underlie every leap forward in computing power, is not the whole story! This fact has become very clear to hardware and software manufacturers in recent years: physical limits to device integration at the micron scale have been reached, although nanotechnologies promise to push those limits further. Parallelism seems nowadays to be the only viable way to sustain the growth of computing power. Section 3.3 is dedicated to multiprocessor computers, with a brief overview of the different types of architectures and of the technological solutions used to realize them. This part concludes with a brief presentation of Grid Computing as a kind of parallel computing solution, with its promises and its limits.
Sections 4 and 5 are entirely dedicated to HPC from the point of view of a user who wants to obtain the best performance from the facilities he/she has, be they a single commodity PC or a big parallel supercomputer.

Section 4 collects techniques and tricks to improve the performance of one's computing codes on a single desktop PC, trying to show that off-the-shelf CPUs have many resources that can be exploited to obtain great results in scientific computing. HPC can be pursued, first of all, for serial codes running on off-the-shelf desktop PCs: the report considers some aspects of code optimization that use all the resources of the serial PC. In Section 4.2, some general rules are given for improving the performance of serial code according to features common to the CPU architectures available today on the desktop market. I treat here only the x86 architectures, also known as IA-32 (32-bit architectures), of modern CPUs. Since 2004, x86_64 (64-bit) CPUs have been available on the high-end desktop market; nowadays they are commonly used in all categories of computers, from desktops to mobile systems, from servers to clusters. 64-bit CPUs compatible with the IA-32 instruction set (introduced first by AMD, then by Intel, which implemented it in the Pentium 4 starting from the Prescott core) improve performance in fixed-point and floating-point arithmetic and, more importantly, can address more than 4 GiB of RAM, which is the main objective they were designed for. Although x86_64 CPUs are the standard in the design of today's multiprocessor supercomputers, the optimization of code running on them requires additional knowledge of hardware features that is beyond the scope of this report. In any case, some of the considerations presented below for x86 architectures remain valid for x86_64 ones. In Section 4.3, other types of optimization methods are described, which are applied not at the source-code level but at the Assembly-code level, using specific option flags implemented in modern compilers. There, and throughout the whole report, I consider the GNU/Linux OS as the standard working environment and GCC (GNU Compiler Collection, version 4.1.1) as its standard compiler set. This choice is motivated by the fact that GNU/Linux OSs and software are the de facto standard tools for Scientific Computing, particularly for cluster computing. Section 4.4 introduces some features of the Intel Fortran-C/C++ compiler, which Intel releases free of charge under a non-commercial license for personal use. This compiler is optimized for Intel CPU architectures and is a good example of a tool for improving the performance of serial codes according to the specific hardware features of the CPU in use. When scaling issues imply that the resources of a single PC are not sufficient for a real-world execution of the code, it becomes necessary to implement a parallel version of the code, to be run on multiprocessor computers.
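As a foretaste of the source-level rules of Section 4.2, consider loop ordering and cache locality. C stores two-dimensional arrays row by row, so a loop nest that walks a matrix column by column throws away most of each cache line it loads from RAM. The sketch below (an illustrative example of mine, not code taken from any cited source) sums the same matrix in both orders; on a typical x86 CPU, the row-wise version is several times faster whenever the matrix does not fit in cache.

#include <stdio.h>

#define N 2048

static double a[N][N];   /* about 32 MiB: much larger than a typical L2 cache */

int main(void)
{
    double sum = 0.0;
    int i, j;

    /* Cache-friendly traversal: the innermost loop runs over the
       rightmost index, matching the row-major layout of C arrays,
       so consecutive accesses fall in the same cache line. */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];

    /* Cache-hostile traversal: the same sum, but successive accesses
       are N*sizeof(double) bytes apart, so almost every one misses
       the cache and forces a new load from RAM. */
    for (j = 0; j < N; j++)
        for (i = 0; i < N; i++)
            sum += a[i][j];

    printf("sum = %f\n", sum);
    return 0;
}

The flag-level optimizations of Section 4.3 come on top of such source-level choices: compiling with gcc -O2 tells GCC 4.1 to apply its general optimization passes, and -march=pentium4 (or the flag matching your actual CPU) lets it select and schedule instructions for that specific processor.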
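To make the message-passing paradigm of Section 5.1.1 concrete, here is the classic minimal MPI program in C, a sketch that assumes an MPI implementation (e.g. MPICH or Open MPI) is installed; it is not taken from the report itself. Under MPI, the same executable is started on every processor, and each copy discovers its own identity (rank) and the total number of processes, which it would then use to pick its share of the work.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime           */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my id among the processes       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes       */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut the MPI runtime down       */
    return 0;
}

Such a program is typically built with the compiler wrapper mpicc hello.c -o hello and launched with mpirun -np 4 ./hello, which starts four cooperating processes.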