Intel Concurrent Collections for Haskell, by Ryan Newton, Chih-Ping Chen, and Simon Marlow
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Here I Led Subcommittee Reports Related to Data-Intensive Science and Post-Moore Computing, and Served on CRA's Board of Directors Since 2015
Vivek Sarkar, Curriculum Vitae (60 pages, 01/06/2020). Contents: Summary; Education; Professional Experience (2017-present: College of Computing, Georgia Institute of Technology; 2007-present: Department of Computer Science, Rice University; 1987-2007: International Business Machines Corporation); Professional Awards; Research Awards; Industry Gifts; Graduate Student and Other Mentoring; Professional Service (Conference Committees; Advisory/Award/Review/Steering Committees); Keynote Talks, Invited Talks, Panels (Selected); Teaching; Publications (Refereed Conference and Journal Publications; Refereed Workshop Publications; Books, Book Chapters, and Edited Volumes); Patents; Software Artifacts (Selected); Personal Information. 1 Summary: Over thirty years of sustained contributions to programming models, compilers, and runtime systems for high-performance computing, which include: 1) leading the development of ASTI during 1991-1996, IBM's first product compiler component for optimizing locality, parallelism, and the (then) new Fortran 90 high-productivity array language (ASTI has continued to ship as part of IBM's XL Fortran product compilers since 1996, and was also used as the foundation for IBM's High Performance Fortran compiler product); 2) leading the research and development of the open-source Jikes Research Virtual Machine at IBM during 1998-2001, a first-of-a-kind Java Virtual Machine (JVM) and dynamic compiler implemented ...
- Slicing (Draft)
Handling Parallelism in a Concurrency Model. Mischael Schill, Sebastian Nanz, and Bertrand Meyer, ETH Zurich, Switzerland, [email protected]. Abstract: Programming models for concurrency are optimized for dealing with nondeterminism, for example to handle asynchronously arriving events. To shield the developer from data race errors effectively, such models may prevent shared access to data altogether. However, this restriction also makes them unsuitable for applications that require data parallelism. We present a library-based approach for permitting parallel access to arrays while preserving the safety guarantees of the original model. When applied to SCOOP, an object-oriented concurrency model, the approach exhibits a negligible performance overhead compared to ordinary threaded implementations of two parallel benchmark programs. 1 Introduction: Writing a multithreaded program can have a variety of very different motivations [1]. Oftentimes, multithreading is a functional requirement: it enables applications to remain responsive to input, for example when using a graphical user interface. Furthermore, it is also an effective program-structuring technique that makes it possible to handle nondeterministic events in a modular way; developers take advantage of this fact when designing reactive and event-based systems. In all these cases, multithreading is said to provide concurrency. In contrast to this, the multicore revolution has accentuated the use of multithreading for improving performance when executing programs on a multicore machine. In this case, multithreading is said to provide parallelism. Programming models for multithreaded programming generally support either concurrency or parallelism. For example, the Actor model [2] or Simple Concurrent Object-Oriented Programming (SCOOP) [3,4] are typical concurrency models: they are optimized for coordination and event handling, and provide safety guarantees such as absence of data races.
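A generic sketch of the abstract's core idea: hand each thread a disjoint slice of one array so that race freedom holds by construction. This is plain C++ with standard threads, not SCOOP/Eiffel and not the paper's library; all names and sizes are illustrative.

```cpp
// Illustrative sketch: each thread receives a disjoint half-open slice
// [lo, hi) of one shared vector, so no element is ever touched by two
// threads and data races are excluded by construction.
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(1'000'000, 1.0);
    unsigned nthreads = std::thread::hardware_concurrency();
    if (nthreads == 0) nthreads = 4;            // fallback if unknown

    std::vector<double> partial(nthreads, 0.0); // one private cell per thread
    std::vector<std::thread> ts;
    for (unsigned t = 0; t < nthreads; ++t)
        ts.emplace_back([&, t] {
            size_t lo = data.size() * t / nthreads;
            size_t hi = data.size() * (t + 1) / nthreads;
            partial[t] = std::accumulate(data.begin() + lo,
                                         data.begin() + hi, 0.0);
        });
    for (auto& th : ts) th.join();

    std::printf("%f\n", std::accumulate(partial.begin(), partial.end(), 0.0));
}
```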
Assessing Gains from Parallel Computation on a Supercomputer
Lilia Maliar, Stanford University. Economics Bulletin, Volume 35, Issue 1. Abstract: We assess gains from parallel computation on the Blacklight supercomputer. Information transfers are expensive. We find that to make parallel computation efficient, a task per core must be sufficiently large, ranging from a few seconds to one minute depending on the number of cores employed. For small problems, shared-memory programming (OpenMP) and a hybrid of shared- and distributed-memory programming (OpenMP&MPI) lead to a higher efficiency of parallelization than distributed-memory programming (MPI) alone. I acknowledge XSEDE grant TG-ASC120048, and I thank Roberto Gomez, Phillip Blood, and Rick Costa, scientific specialists from the Pittsburgh Supercomputing Center, for technical support. I also acknowledge support from the Hoover Institution and Department of Economics at Stanford University, University of Alicante, Ivie, and the Spanish Ministry of Science and Innovation under grant ECO2012-36719. I thank the editor, two anonymous referees, and Eric Aldrich, Yongyang Cai, Kenneth L. Judd, Serguei Maliar, and Rafael Valero for useful comments. Citation: Lilia Maliar (2015), "Assessing gains from parallel computation on a supercomputer", Economics Bulletin, Volume 35, Issue 1, pages 159-167. Contact: Lilia Maliar - [email protected]. Submitted: September 17, 2014. Published: March 11, 2015. 1 Introduction: The speed of processors grew steadily over the last few decades. However, this growth has a natural limit (because the speed of electricity along the conducting material is limited, and because the thickness and length of the conducting material are limited). The recent progress in solving computationally intense problems is related to parallel computation.
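A minimal sketch of the granularity effect the abstract reports, using OpenMP only (the paper also covers MPI and hybrid variants). The task sizes and the synthetic busy-work standing in for the paper's economic-model tasks are assumptions.

```cpp
// A sketch, not the paper's code: parallel speedup as a function of
// per-task size. Tiny tasks cannot amortize parallel overhead.
#include <cmath>
#include <cstdio>
#include <omp.h>

// Synthetic work standing in for one per-core task; `iters` sets its size.
double task(long iters) {
    double x = 0.0;
    for (long i = 0; i < iters; ++i) x += std::sin((double)i);
    return x;
}

int main() {
    const int ntasks = 64;
    for (long iters : {1000L, 100000L, 10000000L}) {
        double t0 = omp_get_wtime();
        double ssum = 0.0;                        // serial baseline
        for (int t = 0; t < ntasks; ++t) ssum += task(iters);
        double ts = omp_get_wtime() - t0;

        double t1 = omp_get_wtime();
        double psum = 0.0;
        #pragma omp parallel for reduction(+ : psum) schedule(static)
        for (int t = 0; t < ntasks; ++t) psum += task(iters);
        double tp = omp_get_wtime() - t1;

        // Speedup approaches the core count only once each task is large
        // enough to amortize thread startup, scheduling, and synchronization.
        std::printf("iters=%8ld  speedup=%5.2f  (check %g)\n",
                    iters, ts / tp, ssum - psum);
    }
    return 0;
}
```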
Ultra-Large-Scale Sites
Ultra-large-scale Sites <working title>: Scalability, Availability and Performance in Social Media Sites. (Picture from social network visualization?) Walter Kriha, with a foreword by << >>. Draft V.1.9.1, 03/12/2010. <<ISBN number, copyright, open access>> © 2010 Walter Kriha. This selection and arrangement of content is licensed under the Creative Commons Attribution License: http://creativecommons.org/licenses/by/3.0/; online: www.kriha.de/krihaorg/. "Building Scalable Social Media Sites" by Walter Kriha (www.kriha.de/krihaorg/books/ultra.pdf) is licensed under a Creative Commons Attribution 3.0 Germany License; permissions beyond the scope of this license may be available at www.kriha.org. Acknowledgements: <<master course, Todd Hoff/highscalability.com..>>. ToDo's: the role of ultra-fast networks (InfiniBand) on distributed algorithms and behaviour with respect to failure models; more on group behaviour from Clay Shirky etc.
- Optimizing Applications for Multicore, by Intel Software Engineer Levent Akyil (Welcome to the Parallel Universe)
Letter to the Editor, by parallelism author and expert James Reinders. Are You Ready to Enter a Parallel Universe: Optimizing Applications for Multicore, by Intel software engineer Levent Akyil. Welcome to the Parallel Universe. Contents: Think Parallel or Perish, by James Reinders (p. 2). James Reinders, Lead Evangelist and a Director with Intel® Software Development Products, sees a future where every software developer needs to be thinking about parallelism first when programming. He first published "Think Parallel or Perish" three years ago. Now he revisits his comments to offer an update on where we have gone and what still lies ahead. Parallelization Methodology (p. 4). The four stages of parallel application development addressed by Intel® Parallel Studio. Writing Parallel Code Safely, by Peter Varhol (p. 5). Writing multithreaded code to take full advantage of multiple processors and multicore processors is difficult. The new Intel® Parallel Studio should help us bridge that gap. Are You Ready to Enter a Parallel Universe: Optimizing Applications for Multicore, by Levent Akyil (p. 8). A look at parallelization methods made possible by the new Intel® Parallel Studio, designed for Microsoft Visual Studio* C/C++ developers of Windows* applications.
- Instruction Level Parallelism Example
Instruction Level Parallelism Example Is Jule peaty or weak-minded when highlighting some heckles foreground thenceforth? Homoerotic and commendatory Shelby still pinks his pronephros inly. Overneat Kermit never beams so quaveringly or fecundated any academicians effectively. Summary of parallelism create readable and as with a bit says if we currently being considered to resolve these two machine of a pretty cool explanation. Once plug, it book the parallel grammatical structure which creates a truly memorable phrase. In order to accomplish whereas, a hybrid approach is designed to whatever advantage of streaming SIMD instructions for each faction the awful that executes in parallel on independent cores. For the ILPA, there is one more type of instruction possible, which is the special instruction type for the dedicated hardware units. Advantages and high for example? Two which is already present data is, to process includes comprehensive career related services that instruction level parallelism example how many diverse influences on. Simple uses of parallelism create readable and understandable passages. Also note that a data dependent elements that has to be imported from another core in another processor is much higher than either of the previous two costs. Why the charge of the proton does not transfer to the neutron in the nuclei? The OPENMP code is implemented in advance way leaving each thread can climb up an element from first vector and compare after all the elements in you second vector and forth thread will appear able to execute simultaneously in parallel. To be ready to instruction level parallelism in this allows enormous reduction in memory. -
- Minimizing Startup Costs for Performance-Critical Threading
Minimizing Startup Costs for Performance-Critical Threading Anthony M. Castaldo R. Clint Whaley Department of Computer Science Department of Computer Science University of Texas at San Antonio University of Texas at San Antonio San Antonio, TX 78249 San Antonio, TX 78249 Email : [email protected] Email : [email protected] Abstract—Using the well-known ATLAS and LAPACK dense on several eight core systems running a standard Linux OS, linear algebra libraries, we demonstrate that the parallel manage- ATLAS produced alarmingly poor parallel performance even ment overhead (PMO) can grow with problem size on even stati- on compute bound, highly parallelizable problems such as cally scheduled parallel programs with minimal task interaction. Therefore, the widely held view that these thread management matrix multiply. issues can be ignored in such computationally intensive libraries is wrong, and leads to substantial slowdown on today’s machines. Dense linear algebra libraries like ATLAS and LAPACK [2] We survey several methods for reducing this overhead, the are almost ideal targets for parallelism: the problems are best of which we have not seen in the literature. Finally, we regular and often easily decomposed into subproblems of equal demonstrate that by applying these techniques at the kernel level, performance in applications such as LU and QR factorizations complexity, minimizing any need for dynamic task scheduling, can be improved by almost 40% for small problems, and as load balancing or coordination. Many have high data reuse much as 15% for large O(N 3) computations. These techniques and therefore require relatively modest data movement. Until are completely general, and should yield significant speedup in recently, ATLAS achieved good parallel speedup using simple almost any performance-critical operation. -
- ;login: The USENIX Magazine, August 2009, Volume 34, Number 4
August 2009, Volume 34, Number 4. The USENIX Magazine. Opinion: Musings (p. 2), Rik Farrow. File Systems: Cumulus: Filesystem Backup to the Cloud (p. 7), Michael Vrable, Stefan Savage, and Geoffrey M. Voelker. Programming: Rethinking Browser Performance (p. 14), Leo Meyerovich; Programming Video Cards for Database Applications (p. 21), Tim Kaldewey. Security: Malware to Crimeware: How Far Have They Gone, and How Do We Catch Up? (p. 35), David Dittrich. Hardware: A Home-Built NTP Appliance (p. 45), Rudi van Drunen. Columns: Practical Perl Tools: Scratch the Webapp Itch with CGI::Application, Part 1 (p. 56), David N. Blank-Edelman; Pete's All Things Sun: T Servers—Why, and Why Not (p. 61), Peter Baer Galvin; iVoyeur: Who Invited the Salesmen? (p. 67), Dave Josephsen; /dev/random (p. 71), Robert G. Ferrell. Book Reviews (p. 74), Elizabeth Zwicky et al. USENIX Notes: USENIX Lifetime Achievement Award (p. 78); STUG Award (p. 79); USENIX Association Financial Report for 2008 (p. 79), Ellie Young; Writing for ;login: (p. 83). Conferences: NSDI '09 Reports (p. 84); Report on the 8th International Workshop on Peer-to-Peer Systems (IPTPS '09) (p. 97); Report on the First USENIX Workshop on Hot Topics in Parallelism (HotPar '09) (p. 99); Report on the 12th Workshop on Hot Topics in Operating Systems (HotOS XII) (p. 109). The Advanced Computing Systems Association. Upcoming Events: 22nd ACM Symposium on Operating Systems Principles (SOSP '09), sponsored by ACM SIGOPS in cooperation with USENIX; 7th USENIX Symposium on Networked Systems Design and Implementation (NSDI '10), sponsored ...
- 18-447 Lecture 21: Parallelism – ILP to Multicores
CMU 18-447 S'10, Lecture 21, © 2010 J. C. Hoe and J. F. Martínez. 18-447 Lecture 21: Parallelism: ILP to Multicores. James C. Hoe, Dept of ECE, CMU, April 7, 2010. Announcements: Lab 4 due this week; optional reading assignments below. Handouts: The Microarchitecture of Superscalar Processors, Smith and Sohi, Proceedings of the IEEE, 12/1995 (on Blackboard); The MIPS R10000 Superscalar Microprocessor, Yeager, IEEE Micro, 4/1996 (on Blackboard); Design Challenges of Technology Scaling, Shekhar Borkar, IEEE Micro, 1999 (on Blackboard). Parallel Processing 101: Assume you have N units of work and each unit of work takes 1 unit-time on 1 processing element (PE). With 1 PE, it will take N unit-time to complete the N units of work. With p PEs, how long does it take to complete the same N units of work? "Ideally," speedup is "linear" with p: speedup = runtime_sequential / runtime_parallel, and the parallel runtime is N/p. [Figure: runtime and speedup plotted against p = 1, 2, 3, 4, 5, ...] It may be linear, but... [Figure: speedup S (axis 1-4) versus p (up to 64).] How could this be? Parallelization Overhead: The cheapest algorithm may not be the most scalable, s.t. runtime_parallel@p=1 = K × runtime_sequential with K > 1, and speedup = p/K. K is known facetiously as the "parallel slowdown." Communications between PEs are not free: a PE may spend extra instructions/time in the act of sending or receiving data; a PE may spend extra time waiting for data to arrive from another PE (a function of latency and bandwidth); a PE may spend extra time waiting for another PE to get to a particular point of the computation (a.k.a. ...
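Restating the slide's definitions as equations (notation mine, consistent with the excerpt):

```latex
% Speedup and parallel slowdown as defined on the slide.
\[
  S(p) = \frac{T_{\mathrm{seq}}}{T_{\mathrm{par}}(p)},
  \qquad
  T_{\mathrm{par}}(p) = \frac{N}{p}
  \;\Rightarrow\; S(p) = p \quad \text{(ideal linear speedup)}.
\]
% With parallel slowdown K > 1, i.e. T_par(1) = K * T_seq:
\[
  S(p) = \frac{T_{\mathrm{seq}}}{K \, T_{\mathrm{seq}} / p} = \frac{p}{K},
\]
% still linear in p, but with slope 1/K below the ideal S = p line,
% matching the "linear but shallow" speedup curve in the slide's figure.
```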
- Performance of Physics-Driven Procedural Animation of Character Locomotion for Bipedal and Quadrupedal Gait
Thesis no: MECS-2015-03. Jarl Larsson, Faculty of Computing, Blekinge Institute of Technology, SE-371 79 Karlskrona, Sweden. This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in partial fulfillment of the requirements for the degree of Master of Science in Engineering: Game and Software Engineering. The thesis is equivalent to 20 weeks of full-time studies. Contact Information: Author: Jarl Larsson, E-mail: [email protected]. University advisors: Ph.D. Veronica Sundstedt and Ph.D. Martin Fredriksson, Department of Creative Technologies, Faculty of Computing. Internet: www.bth.se. Phone: +46 455 38 50 00. Fax: +46 455 38 50 57. Abstract. Context: Animation of character locomotion is an important part of computer animation and games. It is a vital aspect in achieving believable behaviour and articulation for virtual characters. For games there is also often a need to support real-time reactive behaviour in an animation as a response to direct or indirect user interaction, which has given rise to procedural solutions for generating animation of locomotion. Objectives: In this thesis the performance aspects of procedurally generating animation of locomotion within real-time constraints are evaluated, for bipeds and quadrupeds, and for simulations of several characters. A general pose-driven feedback algorithm for physics-driven character locomotion is implemented for this purpose. Methods: The execution time of the locomotion algorithm is evaluated using an automated experiment process, in which real-time gait simulations of incrementing character population count are instantiated and measured, for the bipedal and quadrupedal gaits.
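The shape of the automated experiment loop the abstract describes, as a sketch; `Simulation` and `step()` are hypothetical stand-ins, since the thesis code is not shown in the excerpt.

```cpp
// Hypothetical sketch of the measurement loop: instantiate simulations
// with an incrementing character population and time a fixed frame count.
#include <chrono>
#include <cstdio>

struct Simulation {               // stand-in for one gait simulation
    int characters;
    void step() { /* advance physics for all characters */ }
};

int main() {
    for (int population = 1; population <= 64; population *= 2) {
        Simulation sim{population};
        auto t0 = std::chrono::steady_clock::now();
        for (int frame = 0; frame < 1000; ++frame) sim.step();
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("characters=%2d  avg frame time=%.3f ms\n",
                    population, ms / 1000.0);
    }
}
```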
- Easy Dataflow Programming in Clusters with UPC++ DepSpawn
This is the author's version of an article that has been published in this journal. Changes were made to this version by the publisher prior to publication. The final version of record is available at http://dx.doi.org/10.1109/TPDS.2018.2884716. IEEE Transactions on Parallel and Distributed Systems. Easy Dataflow Programming in Clusters with UPC++ DepSpawn. Basilio B. Fraguela and Diego Andrade. Abstract: The Partitioned Global Address Space (PGAS) programming model is one of the most relevant proposals to improve the ability of developers to exploit distributed memory systems. However, despite its important advantages with respect to the traditional message-passing paradigm, PGAS has not yet been widely adopted. We think that PGAS libraries are more promising than languages because they avoid the requirement to (re)write the applications using them, with the implied uncertainties related to portability and interoperability with the vast amount of APIs and libraries that exist for widespread languages. Nevertheless, the need to embed these libraries within a host language can limit their expressiveness, and very useful features can be missing. This paper contributes to the advance of PGAS by enabling the simple development of arbitrarily complex task-parallel codes following a dataflow approach on top of the PGAS UPC++ library, implemented in C++. In addition, our proposal, called UPC++ DepSpawn, relies on an optimized multithreaded runtime that provides very competitive performance, as our experimental evaluation shows. Index Terms: libraries, parallel programming models, distributed memory, multithreading, programmability, dataflow. 1 Introduction: While the exploitation of parallelism is never trivial, this is particularly true in the case of distributed memory systems such as clusters. Namely, the private space is always the one that can be more efficiently accessed, and the shared local space is accessible ...
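A generic dataflow illustration in plain C++ futures, not the UPC++ DepSpawn API: tasks start when the values they depend on become available, which is the programming style the abstract describes for PGAS programs.

```cpp
// Generic dataflow sketch (NOT the UPC++ DepSpawn API): task c consumes
// the results of tasks a and b, so it runs only once both are ready,
// while a and b themselves may execute concurrently.
#include <future>
#include <iostream>

int main() {
    // a and b have no dependencies and may run concurrently.
    std::future<int> a = std::async(std::launch::async, [] { return 2; });
    std::future<int> b = std::async(std::launch::async, [] { return 3; });

    // c depends on both a and b; it blocks only on the values it needs.
    std::shared_future<int> fa = a.share(), fb = b.share();
    std::future<int> c = std::async(std::launch::async,
                                    [fa, fb] { return fa.get() * fb.get(); });

    std::cout << c.get() << "\n";  // prints 6
}
```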
- EconStor, ZBW Leibniz Information Centre for Economics: Make Your Publications Visible
EconStor, a service of the ZBW, Leibniz Information Centre for Economics (Leibniz-Informationszentrum Wirtschaft): Make Your Publications Visible. Kuchen, Herbert (Ed.); Majchrzak, Tim A. (Ed.); Müller-Olm, Markus (Ed.). Working Paper: Tagungsband 16. Kolloquium Programmiersprachen und Grundlagen der Programmierung (KPS'11) [Proceedings of the 16th Colloquium on Programming Languages and Foundations of Programming], 26-28 September 2011, Schloss Raesfeld, Münsterland. Arbeitsberichte des Instituts für Wirtschaftsinformatik [Working Papers of the Institute of Information Systems], No. 132. Provided in cooperation with: University of Münster, Department of Information Systems. Suggested citation: Kuchen, Herbert (Ed.); Majchrzak, Tim A. (Ed.); Müller-Olm, Markus (Ed.) (2011): Tagungsband 16. Kolloquium Programmiersprachen und Grundlagen der Programmierung (KPS'11): 26. bis 28. September 2011, Schloss Raesfeld, Münsterland. Arbeitsberichte des Instituts für Wirtschaftsinformatik, No. 132, Westfälische Wilhelms-Universität Münster, Institut für Wirtschaftsinformatik, Münster. This version is available at: http://hdl.handle.net/10419/59558. Terms of use: Documents in EconStor may be saved and copied for your personal and scholarly purposes. You are not to copy documents for public or commercial purposes, to exhibit the documents publicly, to make them publicly available on the internet, or to distribute or otherwise use the documents in public. If the documents have been made available under an Open Content Licence (especially Creative Commons Licences), the usage rights granted by that licence apply, notwithstanding these terms of use.