BCL: a Cross-Platform Distributed Data Structures Library
16 pages · PDF · 1020 KB
Recommended publications
- A Trusted Mechanised JavaScript Specification
Martin Bodin (Inria & ENS Lyon), Arthur Charguéraud (Inria & LRI, Université Paris Sud, CNRS), Daniele Filaretti, Philippa Gardner, Sergio Maffeis, Daiva Naudžiūnienė, Gareth Smith (Imperial College London), and Alan Schmitt (Inria).
Abstract: JavaScript is the most widely used web language for client-side applications. Whilst the development of JavaScript was initially just led by implementation, there is now increasing momentum behind the ECMA standardisation process. The time is ripe for a formal, mechanised specification of JavaScript, to clarify ambiguities in the ECMA standards, to serve as a trusted reference for high-level language compilation and JavaScript implementations, and to provide a platform for high-assurance proofs of language properties. We present JSCert, a formalisation of the current ECMA standard …
From the introduction: standardisation was crucial; client code that works on some of the main browsers, and not others, is not useful. The first official standard appeared in 1997. Now we have ECMAScript 3 (ES3, 1999) and ECMAScript 5 (ES5, 2009), supported by all browsers, and plans for ES6 and 7 are well under way. JavaScript is the only language supported natively by all major web browsers. Programs written for the browser are either written directly in JavaScript, or in other languages which compile to JavaScript.
- The Memory Controller Wall: Benchmarking the Intel FPGA SDK for OpenCL Memory Interface
Hamid Reza Zohouri (Tokyo Institute of Technology and Edgecortix Inc., Japan) and Satoshi Matsuoka (Tokyo Institute of Technology and RIKEN Center for Computational Science (R-CCS)); {zohour.h.aa@m, matsu@is}.titech.ac.jp
Abstract: Supported by their high power efficiency and recent advancements in High Level Synthesis (HLS), FPGAs are quickly finding their way into HPC and cloud systems. Large amounts of work have been done so far on loop and area optimizations for different applications on FPGAs using HLS. However, a comprehensive analysis of the behavior and efficiency of the memory controller of FPGAs is missing in the literature, which becomes even more crucial when the limited memory bandwidth of modern FPGAs compared to their GPU counterparts is taken into account. In this work, we analyze the memory interface generated by the Intel FPGA SDK for OpenCL with different configurations for input/output arrays, vector size, interleaving, kernel programming model, on-chip channels, operating frequency, padding, and multiple types of overlapped blocking. Our results point to multiple shortcomings in the memory controller of Intel FPGAs, especially with respect to memory access alignment, that can hinder the programmer's …
Contributions: the paper benchmarks memory efficiency on Intel FPGAs across the configurations above, and outlines one performance bug in Intel's compiler as well as multiple deficiencies in the memory controller that lead to significant loss of memory performance for typical applications; in some of these cases, work-arounds to improve the memory performance are provided.
Methodology: for the evaluation the authors develop an open-source benchmark suite called FPGAMemBench, available at https://github.com/zohourih/FPGAMemBench.
- MainStay VP Income Builder Portfolio Proxy Voting Record (Form N-PX)
Form N-PX report, MainStay VP Funds Trust, MainStay VP Income Builder Portfolio. ICA file number: 811-03833. Reporting period: 07/01/2020 - 06/30/2021. Each proposal below is listed as: number, proposal, management recommendation / vote cast / sponsor.
ABBVIE INC. (Ticker: ABBV, Security ID: 00287Y109). Annual meeting, May 7, 2021; record date March 8, 2021.
1.1 Elect Director Roxanne S. Austin: For / For / Management
1.2 Elect Director Richard A. Gonzalez: For / For / Management
1.3 Elect Director Rebecca B. Roberts: For / For / Management
1.4 Elect Director Glenn F. Tilton: For / For / Management
2 Ratify Ernst & Young LLP as Auditors: For / For / Management
3 Advisory Vote to Ratify Named Executive Officers' Compensation: For / For / Management
4 Amend Omnibus Stock Plan: For / For / Management
5 Amend Nonqualified Employee Stock Purchase Plan: For / For / Management
6 Eliminate Supermajority Vote Requirement: For / For / Management
7 Report on Lobbying Payments and Policy: Against / For / Shareholder
8 Require Independent Board Chair: Against / Against / Shareholder
ALLIANZ SE (Ticker: ALV, Security ID: D03080112). Annual meeting, May 5, 2021.
1 Receive Financial Statements and Statutory Reports for Fiscal Year 2020 (non-voting): None / None / Management
2 Approve Allocation of Income and Dividends of EUR 9.60 per Share: For / Did Not Vote / Management
3 Approve Discharge of Management Board for Fiscal Year 2020: For / Did Not Vote / Management
4 Approve Discharge of Supervisory Board for Fiscal Year 2020: For / Did Not Vote / Management
5 Approve Remuneration Policy: For / Did Not Vote / Management
6 Approve Remuneration of Supervisory Board: For / Did Not Vote / Management
7 Amend Articles Re: Supervisory Board Term of Office: For / Did Not Vote / Management
The excerpt (page 1) ends at the beginning of the ALTRIA GROUP, INC. entry.
- Advances, Applications and Performance of the Global Arrays Shared Memory Programming Toolkit
Jarek Nieplocha, Bruce Palmer, Vinod Tipparaju, Manojkumar Krishnan, Harold Trease, and Edoardo Aprà.
Abstract: This paper describes capabilities, evolution, performance, and applications of the Global Arrays (GA) toolkit. GA was created to provide application programmers with an interface that allows them to distribute data while maintaining the type of global index space and programming syntax similar to that available when programming on a single …
1 Introduction: The two predominant classes of programming models for MIMD concurrent computing are distributed memory and shared memory. Both have advantages and shortcomings. The shared memory model is much easier to use, but it ignores data locality/placement; given the hierarchical nature of the memory subsystems in modern computers, this characteristic can have a negative impact on performance and scalability. Careful code restructuring to increase data reuse and replacing fine-grain load/stores with block access to shared data can address the problem and yield performance for shared memory that is competitive with message passing (Shan and Singh 2000). However, this performance comes at the cost of compromising the ease of use that the shared memory model advertises. Distributed memory models, such as message passing or one-sided communication, offer performance and scalability but are difficult to program. The Global Arrays toolkit (Nieplocha, Harrison, and Littlefield 1994, 1996; Nieplocha et al. 2002a) attempts to offer the best features of both models: it implements a shared-memory programming model in which data locality is managed by the programmer.
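To make the programming style described in this entry concrete, here is a minimal sketch in C of the create/fill/one-sided-get pattern that GA provides. The calls follow the publicly documented GA C API (GA_Initialize, NGA_Create, NGA_Get, and so on), but header names, the MA_init requirement, and chunk conventions vary between GA versions and builds, so treat this as an illustration rather than a verified program.

```c
#include <stdio.h>
#include <mpi.h>
#include "ga.h"
#include "macdecls.h"

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    GA_Initialize();                      /* start the GA runtime on top of MPI */
    MA_init(C_DBL, 1000000, 1000000);     /* some builds expect the MA allocator to be set up */

    int dims[2]  = {1024, 1024};
    int chunk[2] = {-1, -1};              /* let GA pick the data distribution */
    int g_a = NGA_Create(C_DBL, 2, dims, "A", chunk);

    double one = 1.0;
    GA_Fill(g_a, &one);                   /* collective initialization of the global array */

    /* One-sided access to an arbitrary 2x2 patch, wherever it happens to live. */
    int lo[2] = {10, 10}, hi[2] = {11, 11}, ld[1] = {2};
    double buf[4];
    NGA_Get(g_a, lo, hi, buf, ld);        /* no matching send/receive on the owning process */
    GA_Sync();

    if (GA_Nodeid() == 0)
        printf("patch element = %f on %d processes\n", buf[0], GA_Nnodes());

    GA_Destroy(g_a);
    GA_Terminate();
    MPI_Finalize();
    return 0;
}
```

Such a program would typically be launched like any MPI job (for example with mpirun or mpiexec); the point of the sketch is that the get of a remote patch requires no cooperation from the process that owns it, which is exactly the "shared-memory style with explicit locality" trade-off the paper describes.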
- Technology Services: IT Services Sector Review, Q2 2021 (Technology, Media & Telecom)
Select Technology Services / IT Services M&A transactions:
Thrive acquired ONI (announced June 3, 2021; managed services). Thrive, a premier provider of NextGen managed services, acquired ONI, a leading U.K. cloud, hybrid-managed IT, Cisco Gold Partner, and data-center services company. ONI will expand Thrive's geographic footprint, both domestically and internationally, as well as enhance the company's Cisco WAN, unified communication, and cloud expertise.
FireEye announced the sale of the FireEye Products business to Symphony Technology Group for $1.2 billion (announced June 2, 2021; managed security and consulting). The transaction separates FireEye's network, email, endpoint, and cloud security products, along with the related security management and orchestration platform, from Mandiant's controls-agnostic software and services. For FireEye products, this means "strengthened channel relationships" with managed security service providers (MSSPs) based on integration alliances with complementary cybersecurity product vendors.
Cerberus Capital acquired Red River Technology from Acacia Partners (announced June 1, 2021; federal managed services). Red River Technology is a leading provider of technology solutions and managed services with mission-critical expertise in security, networking, data center, collaboration, mobility, and cloud applications. Through the partnership with Cerberus, Red River will continue to grow services to federal government agencies, SLED, and commercial businesses.
Gryphon Investors combined three ServiceNow businesses to form a stand-alone platform (announced May 27, 2021; application partner). Gryphon acquired a majority stake in the ServiceNow division of Highmetric from the Acacia Group, and simultaneously acquired Fishbone Analytics Inc.
- Enabling Efficient Use of UPC and OpenSHMEM PGAS Models on GPU Clusters
Presented at GTC '15 by Dhabaleswar K. (DK) Panda, The Ohio State University, http://www.cse.ohio-state.edu/~panda
Accelerator era: accelerators are becoming common in high-end system architectures; in the Top 100 list (November 2014), 28% of systems use accelerators and 57% of those use NVIDIA GPUs. An increasing number of workloads are being ported to take advantage of NVIDIA GPUs. As they scale to large GPU clusters with high compute density, the higher the synchronization and communication overheads, the higher the penalty, so it is critical to minimize these overheads to achieve maximum performance.
Parallel programming models overview: shared memory (e.g., DSM), distributed memory (MPI), and Partitioned Global Address Space (PGAS) models (Global Arrays, UPC, Chapel, X10, CAF). Programming models provide abstract machine models; models can be mapped onto different types of systems (e.g., DSM, or MPI within a node), and each model has strengths and drawbacks and suits different problems or applications. (The slide includes a diagram contrasting the three classes of models using processes P1, P2, P3 and their memories.)
Outline: overview of PGAS models (UPC and OpenSHMEM); limitations in PGAS models for GPU computing; proposed designs and alternatives; performance evaluation; exploiting GPUDirect RDMA.
Partitioned Global Address Space (PGAS) models: an attractive alternative to traditional message passing; simple …
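As a concrete, host-side illustration of the OpenSHMEM flavour of PGAS programming discussed in this talk (the GPU-specific designs the slides go on to propose are not shown here), the following is a minimal sketch using standard OpenSHMEM calls from roughly the 1.2-era API; the launcher and library specifics are assumed rather than taken from the talk.

```c
#include <stdio.h>
#include <shmem.h>

int main(void) {
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric allocation: every PE gets its own remotely accessible copy. */
    long *counter = shmem_malloc(sizeof(long));
    *counter = 0;
    shmem_barrier_all();

    /* One-sided put into the next PE's copy of 'counter'; the target PE does nothing. */
    long value = (long)me;
    shmem_long_put(counter, &value, 1, (me + 1) % npes);
    shmem_barrier_all();

    printf("PE %d of %d received %ld\n", me, npes, *counter);

    shmem_free(counter);
    shmem_finalize();
    return 0;
}
```

A program like this is usually started with the implementation's SHMEM launcher; the one-sided put is the basic operation whose GPU-memory variant motivates the designs evaluated in the presentation.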
- Automatic Handling of Global Variables for Multi-threaded MPI Programs
Gengbin Zheng, Stas Negara, Celso L. Mendes, Laxmikant V. Kalé (Department of Computer Science, University of Illinois at Urbana-Champaign) and Eduardo R. Rodrigues (Institute of Informatics, Federal University of Rio Grande do Sul, Porto Alegre, Brazil).
Abstract: Hybrid programming models, such as MPI combined with threads, are one of the most efficient ways to write parallel applications for current machines comprising multi-socket/multi-core nodes and an interconnection network. Global variables in legacy MPI applications, however, present a challenge because they may be accessed by multiple MPI threads simultaneously. Thus, transforming legacy MPI applications to be thread-safe in order to exploit multi-core architectures requires proper handling of global variables. In this paper, we present three approaches to eliminate these global variables to ensure thread-safety for an MPI program. These approaches include: (a) a compiler-based refactoring technique, using a Photran-based tool as an example, which automates the source-to-source transformation for programs …
From the introduction: it is necessary to privatize global and static variables to ensure the thread-safety of an application. In high-performance computing, one way to exploit multi-core platforms is to adopt hybrid programming models that combine MPI with threads, where different programming models are used to address issues such as multiple threads per core, decreasing amount of memory per core, load imbalance, etc. When porting legacy MPI applications to hybrid models that involve multiple threads, thread-safety of the applications needs to be addressed again. Global variables cause no problem with traditional MPI implementations, since each process image contains a separate instance of the variable.
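The privatization problem the abstract describes can be illustrated with a small C sketch: a legacy global becomes thread-local so that each "rank" running as a thread keeps its own copy, restoring the per-process semantics the legacy code assumed. This is only a generic illustration of privatization via thread-local storage; it is not the specific Photran-based source-to-source transformation (or either of the paper's other two approaches), and the Fortran codes the paper targets would need a different mechanism.

```c
#include <stdio.h>
#include <pthread.h>

/* Originally a plain global: unsafe once several MPI "ranks" run as threads in one
 * process, because they would all share the single instance. Declaring it
 * _Thread_local gives each thread (each virtualized rank) a private copy. */
_Thread_local int iteration_count = 0;

static void *rank_body(void *arg) {
    int rank = *(int *)arg;
    for (int i = 0; i <= rank; i++)
        iteration_count++;               /* touches this thread's copy only */
    printf("rank %d has completed %d steps\n", rank, iteration_count);
    return NULL;
}

int main(void) {
    pthread_t t[4];
    int ranks[4] = {0, 1, 2, 3};
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, rank_body, &ranks[i]);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    return 0;
}
```

Each thread reports a different count, which is exactly the behavior a legacy per-process global provided and which a shared global would break.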
- Overview of the Global Arrays Parallel Software Development Toolkit: Introduction to Global Address Space Programming Models
P. Sadayappan (Ohio State University); Bruce Palmer, Manojkumar Krishnan, Sriram Krishnamoorthy, Abhinav Vishnu, Daniel Chavarría, Patrick Nichols, Jeff Daily (Pacific Northwest National Laboratory).
Outline of the tutorial: parallel programming models; the Global Arrays (GA) programming model; GA operations; writing, compiling and running GA programs; basic, intermediate, and advanced calls, with C and Fortran examples; GA hands-on session.
(Figure: "Performance vs. Abstraction and Generality", positioning domain-specific systems, GA, CAF, MPI, OpenMP, and autoparallelized C/Fortran 90 on axes of scalability versus generality.)
Parallel programming models: single-threaded data parallel (e.g., HPF); multiple processes with partitioned-local data access (MPI); uniform-global-shared data access (OpenMP); partitioned-global-shared data access (Co-Array Fortran); uniform-global-shared plus partitioned data access (UPC, Global Arrays, X10).
High Performance Fortran: single-threaded view of computation; data parallelism and parallel loops; user-specified data distributions for arrays; the compiler transforms an HPF program into an SPMD program; communication optimization is critical to performance; the programmer may not be conscious of the communication implications of the parallel program. (The slide's code fragments include HPF$ INDEPENDENT parallel loops of the form DO I = 1,N and whole-array assignments such as A(1:100) = B(0:99) + B(2:101).)
- Exascale Computing Project -- Software
Paul Messina (ECP Director) and Stephen Lee (ECP Deputy Director), ASCAC Meeting, Crystal City Marriott, Arlington, VA, April 19, 2017. www.ExascaleProject.org
ECP scope and goals: develop applications to tackle a broad spectrum of mission-critical problems of unprecedented complexity; partner with vendors to develop computer architectures that support exascale applications; support national security; develop a software stack that is both exascale-capable and usable on industrial and academic scale systems, in collaboration with vendors; train a next-generation workforce of computational scientists, engineers, and computer scientists; and contribute to the economic competitiveness of the nation.
ECP has formulated a holistic approach that uses co-design and integration to achieve capable exascale, organized around four areas: Application Development (science and mission applications), Software Technology (a scalable and productive software stack), Hardware Technology (hardware technology elements), and Exascale Systems (integrated exascale supercomputers).
The software stack spans programming models, development environment, and runtimes; math libraries and frameworks; tools; correctness, visualization, and data analysis; applications co-design; workflows and resilience; system software for resource management, threading, scheduling, monitoring, and control; data management, I/O, and the file system; memory and burst buffer; node OS and low-level runtimes; and the hardware interface. ECP's work encompasses applications, system software, hardware technologies and architectures, and workforce …
- The Global Arrays User Manual
Manojkumar Krishnan, Bruce Palmer, Abhinav Vishnu, Sriram Krishnamoorthy, Jeff Daily, Daniel Chavarría. February 8, 2012. Intended to be used with version 5.1 of Global Arrays (Pacific Northwest National Laboratory Technical Report PNNL-13130).
The front matter carries the standard disclaimer and acknowledgment: the material was prepared under United States Government sponsorship, with no warranty expressed or implied as to its accuracy, completeness, or usefulness; the software and its documentation were produced with Government support under contract DE-AC05-76RLO-1830 awarded by the United States Department of Energy, and the Government retains a paid-up, non-exclusive, irrevocable worldwide license to reproduce, prepare derivative works, and perform and display the work publicly by or for the US Government, including the right to distribute to other US Government contractors. December, 2011.
Contents (excerpt): 1 Introduction (1.1 Overview, 1.2 Basic Functionality, 1.3 Programming Model, 1.4 Application Guidelines: when to use GA, when not to use GA); 2 Writing, Building and Running GA Programs (2.1 Platform and Library Dependencies: supported platforms, selection of the communication network for ARMCI) …
Blockchain: the Chain of Trust and Its Potential to Transform Healthcare – Our Point of View
Submitted to the "Use of Blockchain in Health IT and Health-related Research" Ideation Challenge, Office of the National Coordinator for Health Information Technology, by the IBM Global Business Services Public Sector Team, 6710 Rockledge Dr., Bethesda, MD 20817, August 8, 2016.
This white paper is intended to assist the Government with market research/health-related research only. It is not intended as a commitment that IBM will further develop blockchain technology, nor that blockchain technology will address specific health-related requirements. IBM is the sole author, creator, and owner of the submission, or has the right to use the information herein. The submission, to the best of IBM's knowledge, does not and will not violate or infringe upon the intellectual property rights, privacy rights, publicity rights, or other legal rights of any third party.
Executive Summary: As it has for centuries, commerce relies on two things: trust and verified identity. Put more simply: what is being exchanged, and who is confirming it? Yet commerce that was once direct and in-person is today conducted mostly online and requires intermediaries such as banks, governments, or other central authorities to verify the identity of each party and establish the needed trust between them. And whenever there are intermediaries there are inefficiencies: decreased speed, increased cost, and sometimes even fraud. Privacy, too, can be affected, and the centralized information stores of the intermediaries can be vulnerable to attacks.
- Automated Fortran–C++ Bindings for Large-Scale Scientific Applications
Seth R Johnson, HPC Methods for Nuclear Applications Group, Nuclear Energy and Fuel Cycle Division, Oak Ridge National Laboratory (ORNL is managed by UT-Battelle, LLC for the US Department of Energy). github.com/swig-fortran
Overview: introduction; tools; SWIG+Fortran; strategies; example libraries.
Background: SCALE (1969-present) is Fortran/C++; VERA is a multiphysics C++/Fortran code; MPACT makes hand-wrapped calls to C++ Trilinos. At the inception of the Exascale Computing Project, many scientific application codes were primarily Fortran, while numerical/scientific libraries are primarily C/C++; the ForTrilinos product exposes the Trilinos solver library to Fortran application developers. Over time, ECP application codes have trended toward more C++ and less Fortran (credit: Tom Evans).
Motivation: C++ library developers can expand their user base and gain more opportunities for development and follow-on funding; Fortran scientific application developers can use newly exposed algorithms and tools in their codes; multiphysics projects need in-memory coupling of C++ physics code to Fortran physics code; and transitioning application teams get a bite-size migration path from Fortran to C++.
Wrapper "report card": Portability: does it use standardized interoperability? Reusability: how much manual duplication is needed for new interfaces? Capability: does the Fortran interface have parity with the C++? Maintainability: do changes to the C++ code automatically update …
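To give a feel for the general shape of an automated Fortran/C++ binding layer like the one this talk describes, here is a hedged sketch of the flat, C-compatible interface that such generators typically place between a C++ class and the Fortran bind(C) interfaces that call it. All names here (solver_handle, solver_create, and so on) are hypothetical illustrations of the pattern, not SWIG+Fortran's actual generated symbols or ForTrilinos API.

```c
#include <stddef.h>

/* Opaque stand-in for a C++ object; Fortran only ever sees the handle. */
typedef void *solver_handle;

/* Each C++ member function becomes a free function taking the handle first.
 * On the Fortran side, matching bind(C) interface blocks declare these same
 * symbols so the compiler can call them directly. */
solver_handle solver_create(int n);                 /* wraps construction of a solver of size n */
void          solver_set_tolerance(solver_handle s, double tol);
int           solver_solve(solver_handle s, const double *rhs, double *x, size_t n);
void          solver_destroy(solver_handle s);      /* wraps destruction of the C++ object */
```

The value of a generator is that this shim, plus the corresponding Fortran interface blocks and derived types, is produced and kept in sync automatically as the C++ API evolves, which is precisely the "maintainability" criterion in the report card above.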