IBM ACTC: Helping to Make Supercomputing Easier


IBM ACTC: Helping to Make Supercomputing Easier

Luiz DeRose
Advanced Computing Technology Center, IBM Research
Thomas J. Watson Research Center, PO Box 218, Yorktown Heights, NY 10598
[email protected]
HPC Symposium, University of Oklahoma, Sept 12, 2002. © 2002

Outline

Who we are
- Mission statement
- Functional overview and organization
- History

What we do
- Industry solutions and activities
- Education and training
- STC community building
- Application consulting
- Performance tools research

ACTC Mission

- Close the gap between HPC users and IBM
- Conduct research on applications for IBM servers within the scientific and technical community: technical directions, emerging technologies

ACTC Research
- Software tools and libraries
- HPC applications
- Research collaborations
- Education and training

Focus
- AIX and Linux platforms

ACTC Functional Overview

- Outside technology: Linux, computational Grids, DPCL, OpenMP, MPI
- IBM technology: POWER4, RS/6000 SP, GPFS, PSSP, LAPI
- Customer needs: reduce cost of ownership, optimized applications, education, user groups
- Solutions: tools, libraries, application consulting, collaboration, education and user training

ACTC History

Created in September 1998
- Emphasis on helping new customers port and optimize on IBM systems
- Required establishing relationships with scientists at the research level

Expanded operations via alignment with the Web/Server Division:
- EMEA extended in April 1999
- AP extended (TRL) in September 2000
- Partnership with IBM Centers of Competency (Server Group)

ACTC Education

1st POWER4 Workshop
- Jan.
24-25, 2001 at Watson Research
- Full architecture disclosure from Austin

1st Linux Clusters Optimization Workshop
- May 25-27, 2001 at MHPCC
- Application tuning for IA32 and Myrinet

2001 European ACTC Workshop on IBM SP
- Feb. 19-20, 2001 in Trieste (Cineca)
- http://www.cineca.it/actc-workshop/index.html

2001 Japan HPC Forum and ACTC Workshop
- July 17-19, 2001 in Tokyo and Osaka

1st Linux Clusters Institute Workshop
- October 1-5, 2001 at UIUC/NCSA (Urbana)

IBM Research internships at Watson

HPC Community: IBM ScicomP

Established in 1999 via ACTC workshop
IBM external customer organization
- Equivalent to CUG (Cray User Group)

2000: US and European groups merge
2001:
- 1st European meeting (ScicomP 3), May 8-11, Barcelona, Spain (CEPBA)
- US meeting: October 9-12, Knoxville, TN
2002:
- ScicomP 5, Manchester, England (Daresbury Lab)
- ScicomP 6, Berkeley, CA (NERSC); joint meeting with SP/XXL
2003:
- ScicomP 7, Goettingen, Germany, March 3-7
- ScicomP 8, Minneapolis (MSI), August 5-9

http://www.spscicomp.org

HPC Community: LCI

Linux Clusters Institute (LCI)
- http://www.linuxclustersinstitute.org
- Established April 2001
- ACTC partnership with NCSA and HPCERC
- Mission: education and training for deployment of Linux clusters in the STC community
- Primary activity: intensive, hands-on workshops
- Upcoming workshops: Albuquerque, NM (HPCERC), Sep 30 - Oct 4, 2002; University of Kentucky, Lexington, Jan 13-17, 2003

The third LCI International Conference on Linux Clusters
- October 23-25, 2002; St Petersburg, FL
- http://www.linuxclustersinstitute.org/Linux-HPC-Revolution/

Porting Example: NCSC (Cray)

ACTC workshop at NCSC
- Bring your code and we show you how to port it (5-day course)
- 32 researchers attended
- All but one code was ported by the end of the week; the one code
not ported was written in PVM, which was not installed on the SP system.

NCSC now conducts its own workshops.

Porting Examples

NAVO (Cray)
- The production benchmark code allowed a maximum of 380-400 elapsed seconds; ACTC ported and tuned this code to run in under 315 seconds on the SP
- ACTC workshop at NAVO: 8 production codes ported and optimized by the end of the week

EPA (Cray)
- One ACTC person-month to convert the largest user code to the SP
- Codes now run 3 to 6 times faster than they did on the T3E

NASA/Goddard
- 5 codes ported with the help of one ACTC scientist working onsite for two weeks

HPC Tools Requirements

- User demands for performance tools, e.g., memory analysis tools, MPI and OpenMP tracing tools, system utilization tools
- Insufficient tools on IBM systems: limited funding for development; long testing cycle (18 months)
- STC users are comfortable with "AS-IS" software

Performance Tools Challenge

[Slide figure: a one-line source statement, C = A + B, with its element-wise semantics (c1, c2) = (a1, a2) + (b1, b2), shown alongside the PowerPC assembly the compiler actually emits (addi, fmr, stfd, mfspr, lwz, stmw, stw, addis, ...).]
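Getting from machine-level events back to source statements is the tool's job. One common mechanism is a line table that maps instruction addresses to source lines, as debuggers do. A minimal sketch in Python; the addresses and table entries are hypothetical, not taken from any real compiler:

```python
import bisect

# Hypothetical line table: start address of each compiled range -> source
# line number. A real tool would read this from the binary's debug
# information; these values are made up for illustration.
line_table = [
    (0x1000, 12),  # e.g., line 12: C = A + B
    (0x1024, 13),
    (0x1080, 14),
]
starts = [addr for addr, _ in line_table]

def source_line(sampled_pc):
    """Map a sampled program counter to the source line covering it."""
    idx = bisect.bisect_right(starts, sampled_pc) - 1
    if idx < 0:
        return None  # sample fell outside the instrumented code
    return line_table[idx][1]

# A sample taken anywhere inside the compiled span of line 12 attributes
# the event (cache miss, cycle, ...) back to that source statement.
print(source_line(0x1010))  # 12
```

Attributing counter values at source granularity this way is what lets a tool present results in terms of the program the user wrote rather than the instruction stream.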
The user's mental model of the program does not match the executed version
- Performance tools must be able to bridge this semantic gap

ACTC Tools

Goals
- Help IBM sales in HPC and increase user satisfaction
- Complement IBM's application performance tools offerings

Methodology
- Development in-house and in collaboration with universities and research labs
- When available, use infrastructure under development by the IBM development divisions
- Deployment "AS IS" to make tools rapidly available to the user community
- Close interaction with application developers for quick feedback, targeting functionality enhancements

Focus
- AIX and Linux

ACTC Software: Tools

Performance tools
- Hardware Performance Monitor (HPM) Toolkit: HPMCount, LIBHPM, HPMViz, CATCH
- UTE Gantt chart: MPI trace visualization

ACTC Software: Collaborative Tools

OMP/OMPI trace and Paraver
- Graphical user interface for performance analysis of OpenMP programs
- Center for Parallelism of Barcelona (CEPBA) at UPC, Spain

SvPablo
- Performance analysis and visualization system
- Dan Reed, University of Illinois

Paradyn and Dyninst
- Dynamic system for performance analysis
- Bart Miller, University of Wisconsin
- Jeff Hollingsworth, University of Maryland

ACTC Software: Libraries

MPItrace
- Performance trace library for MPI analysis

TurboMPI
- Collective communication functions to enhance MPI performance on SMP nodes

TurboSHMEM
- "Complete" implementation of the Cray SHMEM interface to allow easy and well-tuned porting of Cray applications to the IBM RS/6000 SP

MIO (Modular I/O)
- I/O prefetching and optimization library to enhance AIX POSIX I/O handling

Current Research Projects

Hardware Performance Monitor (HPM) Toolkit
- Data capture, analysis, and presentation of hardware performance metrics for application and system data
- Correlation of application behavior with hardware components
- Hints for program optimization
- Dynamic instrumentation and profiler

Simulation Infrastructure to Guide Memory Analysis (SIGMA)
- Data-centric tool under development to:
  - Identify performance problems and inefficiencies caused by the data layout in the memory hierarchy
  - Propose solutions to improve performance on current and new architectures
  - Predict performance

HPMViz

[Screenshot slide]

SIGMA: Where Is the Memory?

Motivation
- Efficient utilization of the memory subsystem is a critical factor in obtaining performance on current and future architectures

Goals
- Identify bottlenecks, problems, and inefficiencies in a program due to the memory hierarchy
- Support serial and parallel applications
- Support present and future architectures

Current Collaboration Efforts

LLNL (CASC), with Jeff Vetter
- Interpreting hardware counter analysis: handling large volumes of multi-dimensional data; correlating hardware events with performance bottlenecks

Research Centre Juelich, with Bernd Mohr
- Usability and reliability of performance data obtained from performance tools
- Feasibility of automatic performance tools

CEPBA (UPC), with Jesus Labarta
- Performance analysis of OpenMP programs
- OMP/OMPI trace and Paraver interface

University of Karlsruhe, with Klaus Geers
- HPM Collect

Program Analysis: Paraver

[Timeline slide showing functions x_solve, y_solve, z_solve, rhs]

Basics: functions, loops, locks
Hardware counters: TLB misses, L1 misses, ...

Hardware Counters Analysis

Derived metrics
- Instruction mix:
  %loads, %flops
- Computational intensity: flops/loads
- Program on architecture: miss ratios, flops per L1 miss
- Performance: IPC, FPC

Analysis of Decomposition Effects

[Slide comparing 1 x 12, 3 x 4, and 12 x 1 domain decompositions]

Summary

- ACTC is a consolidation of IBM's highest level of technical expertise in HPC
- ACTC research is rapidly disseminated to maximize the benefits of IBM servers for HPC applications
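The derived metrics in the Hardware Counters Analysis slide are simple ratios of raw counter totals. A sketch of how a tool in the spirit of the HPM Toolkit might compute them for one instrumented section; the counter key names here are illustrative placeholders, not actual LIBHPM event names:

```python
def derived_metrics(counts):
    """Derive the slide's metrics from raw hardware-counter totals.

    `counts` holds event totals for one instrumented code section.
    The key names are illustrative, not real LIBHPM events.
    """
    instr = counts["instructions"]
    loads = counts["loads"]
    flops = counts["flops"]
    cycles = counts["cycles"]
    l1_miss = counts["l1_misses"]
    return {
        "%loads": 100.0 * loads / instr,           # instruction mix
        "%flops": 100.0 * flops / instr,
        "computational_intensity": flops / loads,  # flops per load
        "flops_per_L1_miss": flops / l1_miss,      # program on architecture
        "IPC": instr / cycles,                     # instructions per cycle
        "FPC": flops / cycles,                     # flops per cycle
    }

# Made-up counter totals for illustration.
metrics = derived_metrics({
    "instructions": 4_000_000, "cycles": 5_000_000,
    "loads": 1_000_000, "flops": 500_000, "l1_misses": 50_000,
})
print(metrics["IPC"])  # 0.8
```

Ratios like these are what make raw counts actionable: a low computational intensity or a low flops-per-L1-miss figure points at memory behavior rather than arithmetic as the bottleneck.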