Saurashtra University Library Service

Total Pages: 16

File Type: PDF, Size: 1020 KB

Saurashtra University
Re-Accredited Grade 'B' by NAAC (CGPA 2.93)

Mehta, Mihir J., 2006, "Performance measurement and analysis of PC based cluster server using set of architecture and modeling a scalable High performance cluster", thesis PhD, Saurashtra University. http://etheses.saurashtrauniversity.edu/id/eprint/339

Copyright and moral rights for this thesis are retained by the author. A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This thesis cannot be reproduced or quoted extensively without first obtaining permission in writing from the author. The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the author. When referring to this work, full bibliographic details, including the author, title, awarding institution and date of the thesis, must be given.

Saurashtra University Theses Service
http://etheses.saurashtrauniversity.edu
[email protected]
© The Author

"PERFORMANCE MEASUREMENT AND ANALYSIS OF PC BASED CLUSTER SERVER USING SET OF ARCHITECTURE AND MODELING A SCALABLE HIGH PERFORMANCE CLUSTER"

A Thesis Submitted to
SAURASHTRA UNIVERSITY, RAJKOT
for the degree of
Doctor of Philosophy in Computer Science and Application

By
MIHIR JITUBHAI MEHTA

Under the Guidance of
DR. N. N. JANI
Professor and Head, Department of Computer Science, Saurashtra University, Rajkot.

April 2006

CERTIFICATE

This is to certify that the contents of this Ph.D. thesis entitled "High Performance Computing Server and Network, Study and Analysis of Performance and Performance Tuning" are the original research work of Mr. Mihir Jitubhai Mehta and lead to the advancement of knowledge in High Performance Computing. The thesis has been prepared under my supervision. I further certify that the work has not been submitted, either partly or fully, to any other university or institution for the award of any degree.

Date:        Dr. N. N. Jani
Place:       Professor and Head, Dept. of Computer Science, Saurashtra University, Rajkot.

STATEMENT

I hereby declare that the work incorporated in the present thesis is original and has not been submitted to any other university or institution for the award of a diploma or degree. I further declare that the results presented in the thesis, and the considerations made therein, contribute in general to the advancement of knowledge in Computer Science, and in particular to "PERFORMANCE MEASUREMENT AND ANALYSIS OF PC BASED CLUSTER SERVER USING SET OF ARCHITECTURE AND MODELING A SCALABLE HIGH PERFORMANCE CLUSTER" in the context of High Performance Computing and Cluster Servers.

Date:        Mr. Mihir Jitubhai Mehta

DEDICATED TO
MY BROTHER, SISTER & PARENTS
MY WIFE

ACKNOWLEDGEMENT

It is a matter of immense pleasure for me to express my gratitude to all those who have helped me in different ways. I am at a loss for words to express my deepest gratitude to my honourable and learned guide, Dr. N. N. Jani, for his invaluable guidance, constructive criticism and encouragement throughout the course of my study, despite his being unsparingly preoccupied with his own research work and guidance. I acknowledge that he kept me on track gently but firmly.

I am thankful to all the staff of the Department of Computer Science, Saurashtra University, who constantly took interest in my pursuit and facilitated it in every possible way. Their genuine interest in my career and warm wishes constitute a great part of my efforts. I am especially thankful to my co-research scholars for their kind co-operation. I shall be ever grateful to the research scholar Dr. Bhavna Sohan Thakkar and the staff of MVM Computer Education Centre, Rajkot, for their co-operation and goodwill. I am also thankful to Mr. Mayur Raval, Mr. Apurva Pandya, Shri K. H. Atkotia and Dhaval Kathriya for their moral support in my work.

No words would ever be able to express my feelings for my wife, my parents, my brother and sister, and all my relatives and friends who constantly inspired me. Without their constant concern for my career advancement, it would have been nearly impossible for me to complete this stupendous task. Finally, I wish to make devout supplication to the Divine Spirit, without whose blessings this task would not have been accomplished.

Mr. Mihir J. Mehta

CONTENTS

ACKNOWLEDGEMENT                                          I
LIST OF FIGURES, TABLES & GRAPHS                         II

1  Introduction to High Performance Computing            001
   1.1  Research Objective
   1.2  Statement of Title
   1.3  Supercomputers – Current state of the art
   1.4  HPC – Indian Contribution
   1.5  Future scope for development

2  Parallelisation Technology                            028
   2.1  Introduction to Parallel Computing
   2.2  Parallel Computer System
   2.3  Parallel Programming Paradigms
   2.4  Parallel Programming Languages
   2.5  Performance Issues
   2.6  Clusters – An Economic Approach to High Performance Systems

3  Cluster Computing Technology                          110
   3.1  Introduction
   3.2  Cluster Technology
   3.3  Cluster Classifications
   3.4  Study of Commercial High Performance Systems
   3.5  Cluster Features

4  Modelling a Scalable and Low-cost Parallel Architecture  180
   4.1  OpenMosix Technology
   4.2  Cluster O.S.
   4.3  High Performance Network Technology
   4.4  Cluster Management Software – Integration of Cluster O.S. and Cluster Applications
   4.5  Modelling a Cluster Architecture

5  Implementation of the Designed Cluster Architecture, Performance Measurement and Analysis  242
   5.1  Network Connectivity Overview
   5.2  Cluster Installation
   5.3  Resource Sharing Algorithms
   5.4  Process Migration
   5.5  Workload Management
   5.6  Implementation of the Developed Application on the Cluster System
   5.7  Performance Measurement & Analysis
   5.8  Conclusion

LIST OF FIGURES

Chapter 1
1. Potential of HPC

Chapter 2
2. Potential of Parallel Machine
3. UMA Model
4. NUMA Model
5. COMA Model
6. Message Passing Multicomputer
7. Vector Processor Model
8. PRAM Model
9. Shared data approach
10. Message passing approach
11. Architecture of PVM
12. A cluster configuration

Chapter 3
13. Throughput Cluster
14. Capability Cluster
15. JeevaHA system configuration
16. JeevaHA Architecture
17. Architecture of Beowulf cluster
18. Windows cluster server block diagram
19. Ethernet Cluster
20. SCI Topology
21. KVM Switch

Chapter 4
22. openMosixView main window
23. Process Monitor
24. Migration Monitor
25. openMosix analyzer

Chapter 5
26. Proposed Model for openMosix cluster

LIST OF TABLES

Chapter 1
1. HPC applications
2. HPC Users in India

Chapter 2
3. Memory Classification
4. HPC Users in India
5. Comparison of various Multiprocessor Architectures

Chapter 5
6. mosctl – detailed options
7. mosrun – detailed options
8. mpstat – output
9. iostat – output
10. sar – output
11. top – output
12. netstat – output
13. Performance measurement of 1 server node
14. Performance measurement of 1 + 1 server nodes
15. Performance measurement of 1 + 2 server nodes
16. Performance measurement of 1 + 3 server nodes

LIST OF GRAPHS

Chapter 5
1. Performance measurement of 1 server node
2. Performance measurement of 1 + 1 server nodes
3. Performance measurement of 1 + 2 server nodes
4. Performance measurement of 1 + 3 server nodes
5. Performance analysis of suggested cluster setup

Chapter 1: Introduction to High Performance Computing

TABLE OF CONTENTS
1.1 Research Objective
1.2 Statement of Title
1.3 Supercomputers – Current state of the art
1.4 HPC – Indian Contribution
1.5 Future scope for development

1.1 RESEARCH OBJECTIVE

The purpose of this thesis is to report on the work carried out by the researcher over the period of his doctoral study. The subject of supercomputing, and its subset parallel computing, has been researched extensively and is a well-established field. Methodology aside, there will always be an upper limit to how much computation a single-processor computing device can perform in a 'reasonable' time. In the simple von Neumann model, limits are imposed by processor clock rate, memory size and storage capacity (a minimal speedup sketch follows the objectives below).

The main objectives of my study are:

• Present an overview of parallel computing and review the various strategies for parallelism.
• Study high performance computing technology in depth.
• Study detailed information regarding parallelisation technologies, architectures, etc.
• Understand the functionality of cluster systems and the analysis of clusters: the current scenario of the domain, detailed information and practical experience regarding cluster computing, its architecture, its installation procedures, how to build clusters, its performance evaluation, etc.
• Study various clustering projects such as Beowulf, JeevaHA Cluster, Microsoft Cluster Server, Sun Cluster, etc.
• Study the concepts of the openMosix cluster architecture and suggest a model of cluster hardware.
• Investigate the performance characteristics of the parallel application, principally to evaluate the designed model, its scalability and cost.
• Describe further research activities that can expand, utilize and benefit from this work.
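The single-processor ceiling noted above is what the cluster measurements in Chapter 5 probe empirically. As a minimal illustrative sketch (not drawn from the thesis's own measurements), the following Python snippet computes the Amdahl's-law upper bound on speedup for cluster sizes matching the 1, 1 + 1, 1 + 2 and 1 + 3 node configurations measured later; the 90% parallel fraction is an assumption chosen for the demo.

```python
# Minimal sketch: the Amdahl's-law bound on cluster speedup.
# All numbers are illustrative assumptions, not measurements from this
# thesis; the thesis's observed results appear in its Chapter 5 tables.

def amdahl_speedup(parallel_fraction: float, nodes: int) -> float:
    """Upper bound on speedup when `parallel_fraction` of the work can be
    spread across `nodes` processors and the remainder stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / nodes)

if __name__ == "__main__":
    # Assume 90% of the workload parallelises (demo assumption).
    for nodes in (1, 2, 3, 4):  # mirrors the 1, 1+1, 1+2, 1+3 node setups
        print(f"{nodes} node(s): speedup <= {amdahl_speedup(0.9, nodes):.2f}")
```

Measured speedups typically fall below this bound because of scheduling, process migration and network overheads; the Chapter 5 tables report what was actually observed on the 1 to 1 + 3 node setups.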
1.2 STATEMENT OF THE TITLE

"Performance measurement and analysis of PC based Cluster Server using set of architecture and modeling a Scalable High Performance Cluster"

The demand for parallel processing is high. Application areas with computational needs that require parallel processing include biomedical research, fluid dynamics, global climate modeling, molecular modeling, nuclear test simulations, astronomic simulations, and many others. Among them, the interesting and challenging field for me is