IBM Power Systems HPC Solutions Brief


IBM Power Systems
August 2017
Solution Brief

IBM Power Systems HPC solutions
Faster time to insight with the high-performance, comprehensive portfolio designed for your workflows

Highlights
• IBM Power Systems HPC solutions are comprehensive clusters built for faster time to insight
• IBM POWER8 offers the performance, cache and memory bandwidth to drive the best results from high performance computing and high performance data analytics applications
• IBM HPC software is designed to capitalize on the technical features of IBM Power Systems clusters

Architecting superior high performance computing (HPC) clusters requires a holistic approach that responds to performance at every level of the deployment.

IBM high performance computing solutions, built with IBM Power Systems™, IBM® Spectrum™ Computing, IBM Spectrum Storage™, and IBM Software technologies, provide an integrated platform to optimize your HPC workflows, resulting in faster time to insights and value.

The industry's most comprehensive, data-centric HPC solutions
Only IBM provides a total HPC solution, including optimized, best-of-breed components at all levels of the system stack. Comprehensive solutions ensure:
• Rapid deployment
• Clusters that deliver value immediately after acceptance

IBM HPC solutions are built for data-centric computing, and delivered with integration expertise targeting performance optimization at the workflow level. Data-centric design minimizes data motion, enables compute capabilities across the system stack, and provides a modular, scalable architecture that is optimized for HPC.

Data-centric HPC and CORAL
Data-centric design was a primary reason the Department of Energy selected IBM for the CORAL deployment. Summit (Oak Ridge National Laboratory) and Sierra (Lawrence Livermore National Laboratory) will become some of the largest, most groundbreaking, and most utilized installations in the world.
Bring that same data-centric design to your HPC cluster by partnering with IBM.

A total HPC solution
IBM HPC solutions offer industry-leading innovation within and across the system stack. From servers, accelerators, network fabric and storage, to compilers and development tools, cluster management software and cloud integration points, solution components are designed for superior integration and total workflow performance optimization. This comprehensive scope is unique among competitive technology providers and reflects IBM's deep expertise in data-centric system design and integration. Only IBM can deliver a data-centric system optimized for your workflows, realizing the fastest time to insight and value.

Beyond the server: Superior data management and storage
A pillar of data-centric system innovation, IBM Spectrum Scale™ software-defined storage offers scalable, high-performance, and reliable unified storage for files and data objects. It does so with parallel performance for HPC users.

Implementing the unique advantages of IBM Spectrum Scale (formerly GPFS), IBM Elastic Storage Server is a storage solution that provides persistent performance at any scale. It ensures fast access and availability of the right data, at the right time, across clusters. Built-in management and administration tools ensure ease of deployment and continual optimization.

The portfolio spans design, integration, and support across the stack: the cloud; IBM Spectrum Computing; InfiniBand and Ethernet interconnects; IBM Power Systems (RHEL LE, Ubuntu) with accelerators; and IBM Storage, including IBM FlashSystem™, IBM Elastic Storage Server, IBM Spectrum Scale, IBM tape solutions, HPSS, and enterprise storage, together with IBM and partner middleware and development tools.
Middleware and development software includes IBM XL, ESSL, IBM Spectrum MPI, PGI Compilers, NVIDIA CUDA, Parallel Performance Toolkit, LLVM, GCC, OpenMP, and OpenACC (Figure 1: IBM HPC portfolio).

IBM POWER8: Designed for the intersection of high performance computing and high performance data analytics
The IBM POWER8® processor delivers industry-leading performance for HPC and high performance data analytics (HPDA) applications, with multi-threading designed for fast execution of analytics algorithms (eight threads per core), multi-level cache for continuous data load and fast response (including an L4 cache), and a large, high-bandwidth memory workspace to maximize throughput for data-intensive applications.

New IBM Power Systems LC nodes for HPC and HPDA
The IBM Power Systems LC servers are designed for HPC workloads. They allow you to:
• Realize incredible speedups in application performance with accelerators
• Deploy a processor architecture designed for HPC performance
• Benefit from ecosystem innovation from the OpenPOWER Foundation
System: IBM Power Systems S822LC for High Performance Computing
• Processor: 2x POWER8 with NVLink CPUs, 10 cores each, 2.86-3.25 GHz
• Memory: up to 1 TB, 230 GB/s bandwidth
• Storage: 2x 2.5" drives (HDD or SSD), NVMe for ultra-fast I/O
• Acceleration: 4x NVIDIA Tesla P100 with NVLink GPU accelerators
• HPC use cases: built for the next wave of GPU acceleration

System: IBM Power Systems S822LC
• Processor: 2x POWER8 CPUs, 10 cores each, 2.9-3.3 GHz
• Memory: up to 1 TB, 230 GB/s bandwidth
• Storage: 2x 2.5" drives (HDD or SSD), NVMe for ultra-fast I/O
• Acceleration: optional CAPI-attached accelerators, optional Tesla K80
• HPC use cases: built for CPU performance

System: IBM Power Systems S812LC
• Processor: 1x POWER8 CPU, 10 cores, 2.9-3.3 GHz
• Memory: up to 1 TB, 115 GB/s bandwidth
• Storage: 14x 3.5" drives (84 TB, HDD, SSD)
• Acceleration: optional CAPI-attached accelerators
• HPC use cases: optimized for Hadoop, Spark

Table 1: Technical details for three Power Systems offerings

Figure 2 (the POWER8 processor) highlights:
• Cores: 8-12 cores (SMT8), enhanced prefetching
• Caches: 64 KB data cache (L1), 512 KB SRAM L2 per core, 96 MB eDRAM shared L3 with chip interconnect, up to 128 MB eDRAM L4 (off-chip)
• Memory: dual memory controllers, up to 230 GB/sec sustained bandwidth
• On-chip: SMP links, accelerators, PCIe
• Designed for continuous data load, massive I/O bandwidth, superior parallel processing, and large-scale memory processing

Leadership in HPC application performance
IBM HPC solutions are built for better HPC.

Compelling application performance versus competing server architectures:
• CFD results 40 percent faster on OpenFOAM on IBM Power System S822LC compared to competing servers
They allow you to analyze faster, simulate better and process more through these attributes:

Architectural advantages matched to HPC applications, such as memory bandwidth:
• 60-79 percent greater memory bandwidth compared to competing servers

STREAM Triad results (GB/sec): IBM S822LC (20c/160t, 32 DIMMs): 189; Intel Server System, E5-2690 v3 (24c/48t, 2DPC): 118.2 (+60 percent for IBM); Intel Server System, E5-2690 v3 (24c/48t): 105.4 (+79 percent for IBM).

simpleFoam, 1 node: runtimes versus grid points up to 1.E+08 (smaller values preferred), IBM Power S822LC versus Bull R424 E4.
• Results are based on IBM internal testing of systems running OpenFOAM version 2.3.0 code benchmarked on POWER8 systems. Individual results will vary depending on individual workloads, configurations and conditions.
• IBM Power Systems S822LC, POWER8, 3.5 GHz, 512 GB memory, 2x 10-core processors / 4 threads per core. Job size 128 GB memory per socket.
• BULL R424-E4, Intel Xeon E5-2680v3, 2.3 GHz, 256 GB memory, 2x 10-core processors / 1 thread per core. Job size 128 GB memory per socket.
Figure 4: OpenFOAM simpleFoam 1 node

• IBM Power Systems S822LC results are based on IBM internal measurements of STREAM Triad; 20 cores / 20 of 160 threads active. POWER8; 3.5 GHz, up to 1 TB memory.
• Intel Xeon data is based on published data of Intel Server Systems R2208WTTYS running STREAM Triad; 24 cores / 24 of 48 threads active, E5-2690 v3; 2.3 GHz.
Figure 3: STREAM Triad

Compelling throughput on GPU computing applications and workloads:
• Up to a 7.3X improvement in NAMD performance by adding NVIDIA Tesla P100 GPUs to the workload
• Up to 2.7X the throughput of Kinetica Filter-by-Location queries through POWER8, Tesla P100, and NVIDIA NVLink, as compared to a competing server
• Up to 2.91X the realized CPU:GPU bandwidth of x86 servers featuring PCI-E x16 3.0, unleashing custom code

Additional application proof points are available at:
https://www.ibm.com/developerworks/linux/perfcol/

NAMD relative performance (STMV): 7.3X for S822LC for HPC (20c/2.86GHz, 4 Tesla P100 GPUs) versus an x86 competitor (20c/2.4GHz, 4 Tesla P100 GPUs).
• Results are based on IBM internal testing of systems running NAMD version 2.11 STMV code benchmarked on POWER8 systems with NVIDIA Tesla P100 GPUs. Individual results will vary depending on individual workloads, configurations and conditions.
• IBM Power Systems S822LC; 20 cores / 160 threads, POWER8 with NVLink; 2.86 GHz, 256 GB memory
• IBM Power Systems S822LC; 20 cores / 160 threads, POWER8 with NVLink, 2.86 GHz, 256 GB memory, 2 Tesla P100 GPUs
Figure 6: NAMD performance

Kinetica queries per hour (1,000s): 2.7X for S822LC (20c/2.86GHz, 1x Tesla P100 GPU) versus S822LC (20c/2.86GHz).
• Results are based on IBM internal testing of Kinetica Filter-by-Location query with 280,000,000 records. Individual results will vary depending on individual workloads, configurations and conditions.
• IBM Power Systems S822LC for HPC; 20 cores / 160 threads, POWER8 with NVLink, 2.86 GHz, 1024 GB memory, 4 Tesla P100 with NVLink GPUs
• x86 competitor; 20 cores / 40 threads, Xeon E5-2640 v4; 2.4 GHz, 512 GB memory, 4 Tesla P100 PCI-E GPUs
Figure 5: Kinetica accelerated database performance

CUDA H2D bandwidth (GB/sec): IBM S822LC (POWER8 with NVLink, Tesla P100): 34.1; x86 competitor (E5-2640 v4, Tesla K80 device 0): 11.7 — a 2.9X advantage.
• IBM Power Systems S822LC results are based on IBM internal measurements of CUDA H2D BW Test; 20 cores, POWER8 with NVLink; 2.86 GHz, up to 256 GB memory, 1 Tesla P100.
• Intel Xeon data is based on IBM internal testing; 20 cores / 40 threads active, Xeon E5-2640 v4, 2.4 GHz, Tesla K80 GPU device 0.