Curriculum Vitæ of Simone Sbaraglia


PERSONAL DATA:

Name: Simone Sbaraglia
Date of Birth: June 27, 1972
Place of Birth: Rome, Italy
Address: L.go Leo Longanesi 9, 00142, Rome, Italy
E-mail: [email protected]

EDUCATION:

March 2003: Ph.D. in "Mathematical Methods and Models for Technology and Society", Faculty of Engineering, University of Rome "La Sapienza". Thesis title: "Efficient Optimization Techniques in Finance and Economics".

October 1998: Graduated in Mathematics summa cum laude, University of Rome "La Sapienza". Thesis title: "Hamilton-Jacobi equations on bounded domains with weak boundary conditions".

CURRENT POSITION:

Since October 1, 2005: Professore Associato in "Matematica Generale" (Tenured Professor - Research Faculty Member) at the University of Cagliari, Italy.

PROFESSIONAL CAREER:

October 1, 2003 - October 1, 2005: Permanent research position as Research Staff Member at the Advanced Computing and Technology Center, I.B.M. T.J. Watson Research Center, I.B.M. Research, Yorktown Heights, NY.
- Chief architect and developer of "pSigma", a performance-oriented infrastructure for the instrumentation of binary applications.
- Chief architect and developer of the "Sigma" project for the simulation of shared- and distributed-memory architectures and deep memory hierarchies.
- Principal Investigator in the NSF (National Science Foundation) funded research project "Performance Measurements and Modeling of Deep Memory Hierarchy Systems".
- Member of the IBM Task Force on "Performance Modeling and Tools".
- Chief architect of the IBM HPC Toolkit v2.

March 1, 2003 - October 8, 2003: Post-doctoral position at the Advanced Computing and Technology Center, T.J. Watson Research Center, I.B.M. Research, Yorktown Heights, New York.
- Lead developer of "SIGMA", a data collection framework and cache analysis tool that provides detailed cache information by gathering memory reference data using software-based instrumentation.
- Data-centric performance tools.
- Efficient compression and analysis of memory traces.

November 1, 2002 - February 28, 2003: Supplemental position at the T.J. Watson Research Center, I.B.M. Research, Yorktown Heights, New York.
- Design and implementation of a simulator of the memory subsystem for the IBM Power3 and Power4 systems.
- Development of tools to process memory trace files.
- Simulation and performance analysis of prefetching algorithms.

January 2, 2002 - October 28, 2002: Research grant with the C.N.R.-I.A.C. (National Research Council - Institute for Applied Computing).
- Design and efficient implementation of algorithms for the approximation of optimality problems arising in financial theory.

July 2, 2001 - January 2, 2002: Supplemental position at the T.J. Watson Research Center, I.B.M. Research, Yorktown Heights, New York.
- Development of architecture-independent memory simulators and tools to analyze and improve the performance of a memory subsystem.

April 1, 2000 - June 30, 2001: Research grant with the C.N.R.-I.A.C. (National Research Council - Institute for Applied Computing).
- Design and implementation of a prototype for optimal asset-liability management with constraints for insurance companies.
- Development of effective approximation methods for Hamilton-Jacobi equations of first and second order.

August 1, 1999 - April 1, 2000: Research grant with the C.N.R.-I.A.C.
- Quantitative analysis of Mortgage-Backed Securities: development of a mathematical method to price Mortgage-Backed Securities and of approximation algorithms to solve the pricing problem numerically.

March 1, 1999 - August 1, 1999: Collaboration with INA S.p.A., Capital Markets Department.
- Modeling of the assets and liabilities of a life insurance company and development of a software package to perform Monte Carlo simulations of the evolution of a life insurance company's liabilities.
January 1, 1999 - June 6, 1999: Research grant with the C.N.R.-I.A.C.
- Development of a software package (including a graphical interface) for the simulation of the propagation of fractures and the diffusion of chemical pollutants in porous materials.

PUBLICATIONS:

2013: Patent, "Profiling application performance according to data structure", I-Hsin Chung, Guojing Cong, David Joseph Klepacki, Simone Sbaraglia, Seetharami R. Seelam, Hui-Fang Wen.

2013: Patent, "Binary programmable method for application performance data collection", I-Hsin Chung, Guojing Cong, David Joseph Klepacki, Simone Sbaraglia, Seetharami R. Seelam, Hui-Fang Wen.

2012: Patent, "Programmable framework for automatic tuning of software applications", I-Hsin Chung, Guojing Cong, David Joseph Klepacki, Simone Sbaraglia, Seetharami R. Seelam, Hui-Fang Wen.

2012: Patent, "Automated detection of application performance bottlenecks", I-Hsin Chung, Guojing Cong, David Joseph Klepacki, Simone Sbaraglia, Seetharami R. Seelam, Hui-Fang Wen.

S. Sbaraglia, I. Chung, G. Cong, D. Klepacki, S. Seelam, H. Wen, An Extensible Bottleneck Discovery Framework, Proceedings of the 2008 ACM/IEEE International Conference on Supercomputing, November 15-21, 2008, Austin, TX.

S. Sbaraglia, I. Chung, G. Cong, D. Klepacki, S. Seelam, H. Wen, A Framework for Automated Performance Bottleneck Detection, 13th International Workshop on High-Level Parallel Programming Models and Supportive Environments, IPDPS 2008, April 14, 2008, Miami, FL.

2008: IBM Research Invention Achievement Award for the patent invention "A Method for an Extensible and Programmable Framework for Automating Performance Analysis and Tuning of Software Applications".

2008: IBM Research Invention Achievement Award in recognition of the research conducted on data-centric profiling of scientific applications. Patent invention: "Profiling Application Performance According to Data Structures".
2007: IBM Research Invention Achievement Award in recognition of the research conducted on the automatic characterization of application performance bottlenecks. Patent invention: "An Extensible Infrastructure for Automated Detection of Application Performance Bottlenecks".

2007: IBM Research Invention Achievement Award in recognition of the research conducted on binary instrumentation and performance data collection. Patent invention: "A Programmable Binary Method for Application Performance Data Collection".

S. Sbaraglia, I. Chung, G. Cong, D. Klepacki, S. Seelam, H. Wen, A Productivity Centered Application Performance Tuning Framework, Proceedings of the II International Conference on Performance Evaluation Methodologies and Tools (Valuetools), Nantes, France, Oct. 23-25, 2007.

S. Sbaraglia, I. Chung, G. Cong, D. Klepacki, S. Seelam, H. Wen, A Productivity Centered Tools Framework for Application Performance Tuning, Proceedings of the IV International Conference on the Quantitative Evaluation of Systems (QEST07), Edinburgh, Scotland, Sep. 16-19, 2007.

G. Cong, S. Sbaraglia, A study on the locality behavior of parallel and sequential algorithms for connectivity problems, Proceedings of the International Conference on High Performance Computing, Bangalore, India, Dec. 18-21, 2006.

M. Papi, S. Sbaraglia, Optimal Asset-Liability Management with Constraints: A Dynamic Programming Approach, Applied Mathematics and Computation, Vol. 173, n. 1, 2006. ISSN: 0096-3003.

S. Sbaraglia, J. Odom, L. DeRose, K. Ekanadham, J. Hollingsworth, Using Dynamic Tracing Sampling to Measure Long Running Programs, Proceedings of the 2005 ACM/IEEE International Conference on Supercomputing, November 12-18, 2005, Seattle, WA.

S. Sbaraglia, K. Ekanadham, S. Crea, S. Seelam, pSigma: An Infrastructure for Parallel Application Performance Analysis using Symbolic Specifications, Proceedings of the EWOMP Conference, October 18-22, 2004, Stockholm, Sweden.

M. Papi, S. Sbaraglia, Lipschitzian Estimates in Discrete-Time Constrained Optimal Control, "Dynamics of Continuous, Discrete and Impulsive Systems, Series A: Mathematical Analysis", Vol. 13, n. 1, 2006. ISSN: 1201-3390.

L. DeRose, K. Ekanadham, S. Sbaraglia, An Approach for Symbolic Mapping of Memory References, Proceedings of the International Conference Euro-Par 2004, Pisa, Italy, Aug. 31 - Sep. 3, 2004.

M. Adamo, A. Amadori, M. Bernaschi, C. La Chioma, A. Marigo, B. Piccoli, S. Sbaraglia, A. Uboldi, D. Vergni, Optimal Strategies for the Issuance of Public Debt Securities, accepted for publication in the "International Journal of Theoretical and Applied Finance", Vol. 7, 2004.

M. Papi, S. Sbaraglia, Regularity Properties of Constrained Set-Valued Maps, "Nonlinear Analysis: Theory, Methods & Applications", Vol. 54, n. 7, 2003. ISSN: 0362-546X.

R. Natalini, C. Nitsch, G. Pontrelli, S. Sbaraglia, A numerical study of a nonlocal model of damage propagation under chemical aggression, "European Journal of Applied Mathematics", Vol. 14, Issue 4, 2003, pp. 447-464.

M. Bernaschi, M. Briani, F. Gozzi, M. Papi, S. Sbaraglia, A model for the optimal asset liability management for insurance companies, "International Journal of Theoretical and Applied Finance", Vol. 6, 227, 2003.

L. DeRose, K. Ekanadham, J.K. Hollingsworth, S. Sbaraglia, SIGMA: A Simulator Infrastructure to Guide Memory Analysis, Proceedings of the 2002 ACM/IEEE Conference on Supercomputing, pp. 1-13, November 16, 2002, Baltimore, Maryland.

OTHER RESEARCH ACTIVITIES:

June 2008: Organization of the X Italian-Spanish Congress on Financial and Actuarial Mathematics, Cagliari, June 23-25, 2008.

Since August 2006: Research consultant of the IBM Productivity Group on the DARPA PERCS/HPCS project for the development of High Performance Productive Computing Systems.

May 2006: