Revolution: Greatest Hits


SHORT ON TIME? VISIT THESE OBJECTS INSIDE THE REVOLUTION EXHIBITION FOR A QUICK OVERVIEW.

REVOLUTION: PUNCHED CARDS
Hollerith Electric Tabulating System, 1890
This device helped the US government complete the 1890 census in record time and, in the process, launched the use of punched cards in business. IBM used the Hollerith system as the basis of its business machines.

REVOLUTION: BIRTH OF THE COMPUTER
ENIAC, 1946
Used in World War II to calculate gun trajectories, only a few pieces remain of this groundbreaking American computing system. ENIAC used 18,000 vacuum tubes and took several days to program.

REVOLUTION: BIRTH OF THE COMPUTER
ENIGMA, ca. 1935
Few technologies were as decisive in World War II as the top-secret German encryption machine known as ENIGMA. Breaking its code was a full-time task for Allied code breakers, who invented remarkable computing machines to help solve the ENIGMA riddle.

REVOLUTION: REAL-TIME COMPUTING
Raytheon Apollo Guidance Computer, 1966
This 70 lb. box, built using the new technology of integrated circuits (ICs), guided the Apollo 11 astronauts to the Moon and back in July 1969, the first manned Moon landing in human history. The AGC was a lifeline for the astronauts throughout the eight-day mission.

REVOLUTION: MEMORY & STORAGE
IBM RAMAC Disk Drive, 1956
Seeking a faster method of processing data than punched cards, IBM invented the world's first disk drive in its San Jose laboratories. This stack of 50 disks held about 3.8 MB—enough space for one song today.

REVOLUTION: SUPERCOMPUTERS
Cray-1 Supercomputer, 1976
The stunning Cray-1 was the fastest computer in the world. Its legendary inventor, Seymour Cray, designed it in a circular shape to keep the interconnecting wires short, increasing speed. It was used in weather forecasting, bomb design, and cryptography.

REVOLUTION: MINICOMPUTERS
Kitchen Computer, 1969
Made by Honeywell and sold by Neiman Marcus, this fanciful computer was designed as a system for housewives to store their recipes. None were sold.

REVOLUTION: AI & ROBOTICS
SRI Shakey the Robot, 1969
Shakey was the first robot that could reason about its own actions without human control. Special software let it create a "map" of its environment, which let it move about freely, even avoiding obstacles placed in its path.

REVOLUTION: COMPUTER GAMES
Atari Pong Prototype, 1972
Remember Pong, the must-have gift of 1975? See Atari's original Pong prototype and play a live Pong game with a friend!

REVOLUTION: PERSONAL COMPUTERS
Apple-1, 1976
This simple-looking circuit board is the genesis of Apple Computer. Designed by Apple co-founder Steve Wozniak, it needed only a keyboard, a power supply, and a TV set added by the user to become a working computer system.

REVOLUTION: MOBILE COMPUTING
Osborne 1, 1981
This was one of the first commercially successful portable computers. How portable was it? Lift one for yourself in our Mobile Computing gallery!

REVOLUTION: NETWORKING & THE WEB
Google Server Engine, 1999
In its early days, Google was on a tight budget and built its own servers out of easily available commercial computer parts. This server engine, one of many identical systems, used 80 PCs and some networking equipment to answer search requests sent to Google.
[Gallery map labels: SRI Shakey the Robot, Cray-1 Supercomputer, Kitchen Computer, IBM RAMAC Disk Drive, Raytheon Apollo Guidance Computer, ENIAC, Atari Pong Prototype, ENIGMA, Apple-1, Osborne 1, Hollerith Electric Tabulating System, Google Server Engine]

To learn more visit: computerhistory.org
Last Updated: 2017_06