Supercomputing Frontiers: 6th Asian Conference, SCFA 2020, Singapore, February 24–27, 2020, Proceedings (Lecture Notes in Computer Science 12082)

Total pages: 16

File type: PDF, size: 1020 KB

Dhabaleswar K. Panda (Ed.)

Supercomputing Frontiers: 6th Asian Conference, SCFA 2020, Singapore, February 24–27, 2020, Proceedings
Lecture Notes in Computer Science 12082

Founding Editors: Gerhard Goos, Karlsruhe Institute of Technology, Karlsruhe, Germany; Juris Hartmanis, Cornell University, Ithaca, NY, USA

Editorial Board Members: Elisa Bertino, Purdue University, West Lafayette, IN, USA; Wen Gao, Peking University, Beijing, China; Bernhard Steffen, TU Dortmund University, Dortmund, Germany; Gerhard Woeginger, RWTH Aachen, Aachen, Germany; Moti Yung, Columbia University, New York, NY, USA

More information about this series at http://www.springer.com/series/7407

Editor: Dhabaleswar K. Panda, Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA

ISSN 0302-9743, ISSN 1611-3349 (electronic), Lecture Notes in Computer Science
ISBN 978-3-030-48841-3, ISBN 978-3-030-48842-0 (eBook), https://doi.org/10.1007/978-3-030-48842-0
LNCS Sublibrary: SL1 – Theoretical Computer Science and General Issues

© The Editor(s) (if applicable) and The Author(s) 2020. This book is an open access publication.

Open Access. This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

As the share of supercomputers in Asia continues to increase, the relevance of supercomputing merits a supercomputing conference for Asia. Supercomputing Asia (SCA 2020) was planned to be an umbrella of notable supercomputing events that promote a vibrant HPC ecosystem in Asian countries.
With over 600 speakers, participants, and exhibitors already pre-registered to attend, SCA20 was on course to be the biggest SCA conference yet. It was planned to be held during February 24–27, 2020, at the Suntec Singapore Convention and Exhibition Centre. Unfortunately, the physical conference was canceled due to the COVID-19 pandemic. However, the current proceedings contain the list of papers selected under its technical paper program.

The technical program of SCA19 provided a platform for leaders from both academia and industry to interact and to discuss visionary ideas, important global trends, and substantial innovations in supercomputing. SCA19 was attended by over 700 delegates from over 20 different countries. In March 2017, the National Supercomputing Centre (NSCC) Singapore hosted Supercomputing Frontiers (SCF 2017). NSCC expanded the scope of SCF by embarking on the first Supercomputing Frontiers Asia (SCFA) technical paper program at SCA 2018. NSCC was established in 2015 and manages Singapore's first national petascale facility with available HPC resources to support science and engineering computing needs for academic, research, and industry communities.

SCFA represents the technical program for SCA 2020, consisting of four tracks:

– Application, Algorithms, and Libraries
– Architecture, Network/Communications, and Management
– Data, Storage, and Visualization
– Programming Models and Systems Software

The submitted papers for the technical papers program went through a rigorous peer review process by an International Program Committee. A set of eight papers were finally selected for inclusion in these proceedings. The accepted papers cover a range of topics including file systems, memory hierarchy, HPC cloud platforms, container image configuration workflows, large-scale applications, and scheduling.

I would like to thank all authors for their submissions to this conference. My sincere thanks to all Program Committee members for doing high-quality and in-depth reviewing of the submissions and selecting the papers for this year's program. I would like to thank the conference organizers for giving me an opportunity to serve this year's conference as the technical papers chair. To the readers, please enjoy these proceedings.

April 2020
Dhabaleswar K. (DK) Panda

Organization

Program Chair
Dhabaleswar K. (DK) Panda, The Ohio State University, USA

Program Committee
Ritu Arora, Texas Advanced Computing Center, USA
Costas Bekas, IBM Zurich, Switzerland
Hal Finkel, Argonne National Laboratory, USA
Piotr R. Luszczek, University of Tennessee at Knoxville, USA
Antonio Peña, Barcelona Supercomputing Center, Spain
Nathan Tallent, Pacific Northwest National Laboratory, USA
Pradeep Dubey, Intel Corporation, USA
Rajkumar Buyya, The University of Melbourne, Australia
Albert Zomaya, The University of Sydney, Australia
Kishore Kothapalli, IIIT Hyderabad, India
Jianfeng Zhan, Institute of Computing Technology (ICT), China
Amitava Majumdar, San Diego Supercomputer Center, USA
Nikhil Jain, Google Inc., USA
Ron Brightwell, Sandia National Laboratories, USA
John Kim, Korea Advanced Institute of Science and Technology (KAIST), South Korea
Ryota Shioya, Nagoya University, Japan
R. Govindarajan, Indian Institute of Science Bangalore, India
Quincey Koziol, Lawrence Berkeley National Laboratory, USA
Amelie Chi Zhou, Shenzhen University, China
Ugo Varetto, Pawsey Supercomputing Centre, Australia
Fang-Pang Lin, National Center for High-performance Computing (NCHC), Taiwan
Olivier Aumage, Inria, France
Sunita Chandrasekaran, University of Delaware, USA
Bilel Hadri, King Abdullah University of Science and Technology, Saudi Arabia
Hai Jin, Huazhong University of Science and Technology, China
Arthur Maccabe, Oak Ridge National Laboratory, USA
Naoya Maruyama, Lawrence Livermore National Laboratory, USA
Ronald Minnich, Google Inc., USA
Yogesh Simmhan, Indian Institute of Science Bangalore, India
Sandra Gesing, University of Notre Dame, USA
Michela Taufer, University of Tennessee at Knoxville, USA
Depei Qian, Sun Yat-sen University, China
Frances Lee, Nanyang Technological University, Singapore
Martin Schulz, Technical University of Munich, Germany
Tin Wee Tan, National Supercomputing Centre, Singapore

Contents

File Systems, Storage and Communication

A BeeGFS-Based Caching File System for Data-Intensive Parallel Computing (p. 3)
David Abramson, Chao Jin, Justin Luong, and Jake Carroll

Multiple HPC Environments-Aware Container Image Configuration Workflow for Large-Scale All-to-All Protein–Protein Docking Calculations (p. 23)
Kento Aoyama, Hiroki Watanabe, Masahito Ohue, and Yutaka Akiyama

DAOS: A Scale-Out High Performance Storage Stack for Storage Class Memory (p. 40)
Zhen Liang, Johann Lombardi, Mohamad Chaarawi, and Michael Hennecke

Cloud Platform Optimization for HPC (p. 55)
Aman Verma

Applications and Scheduling

swGBDT: Efficient Gradient Boosted Decision Tree on Sunway Many-Core Processor (p. 67)
Bohong Yin, Yunchun Li, Ming Dun, Xin You, Hailong Yang, Zhongzhi Luan, and Depei Qian

Numerical Simulations of Serrated Propellers to Reduce Noise (p. 87)
Wee-beng Tay, Zhenbo Lu, Sai Sudha Ramesh, and Boo-cheong Khoo

High-Performance Computing in Maritime and Offshore Applications (p. 104)
Kie Hian Chua, Harrif Santo, Yuting Jin, Hui Liang, Yun Zhi Law, Gautham R. Ramesh, Lucas Yiew, Yingying Zheng, and Allan Ross Magee

Correcting Job Walltime in a Resource-Constrained Environment (p. 118)
Jessi Christa Rubio, Aira Villapando, Christian Matira, and Jeffrey Aborot

Author Index (p. 139)

File Systems, Storage and Communication

A BeeGFS-Based Caching File System for Data-Intensive Parallel Computing

David Abramson, Chao Jin, Justin Luong, and Jake Carroll
The University of Queensland, St Lucia, QLD 4072, Australia
{david.abramson,c.jin,justin.luong,jake.carroll}@uq.edu.au

Abstract. Modern high-performance computing (HPC) systems are increasingly using large amounts of fast storage, such as solid-state drives (SSD), to accelerate disk access times. This approach has been exemplified in the design of "burst buffers", but more general caching systems have also been built. This paper proposes extending an existing parallel file system to provide such a file caching layer. The solution unifies data access for both the internal
Recommended publications
  • Base Performance for BeeGFS with Data Protection
BeeGFS Data Integrity Improvements without Impacting Performance
Presenters: Dr. M. K. Jibbe (Technical Director, NetApp ESG) and Joey Parnell (Storage Architect, NetApp ESG)
2019 Storage Developer Conference. © 2019 NetApp Inc. NetApp Proprietary. All Rights Reserved.

Current Problem
• Lustre has been a common choice in the HPC market for years, but the acquisition of the Lustre intellectual property by a storage provider in 2018 has created a challenge for users that want to avoid vendor lock-in.
• Parallel file system needs are expanding beyond HPC, but most lack enterprise reliability.
• BeeGFS configurations lack data protection at different locations: storage clients and servers do not have cache protection in the event of kernel panics, software crashes, or power loss. Potential issues are:
  • Data cached in the BeeGFS client itself
  • Data cached at the underlying filesystem layer in the storage server (XFS, ZFS, ext4, etc.)
  • Data cached by the OS kernel itself underneath the filesystem in the storage server

Goal of the Team (Remove Competitor and Expand Market Reach of BeeGFS)
• Improve solution resiliency to ensure data integrity by identifying potential component faults that could lead to data loss, and change the system design to handle the fault without any data integrity challenges.
• Address the known market opportunity: simple solution and support.
(Swim-lane diagram: Parallel File System; ML BeeGFS; BeeGFS IB and FC)

Data Integrity & Availability Expectations in HPC (Provide Enterprise Reliability with E-Series Storage – BeeGFS)
However, the market we serve goes beyond the HPC scratch file system use cases.
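The cache-protection gaps listed above are also visible from the application side: unless a writer explicitly pushes data past the client and OS caches, a kernel panic or power loss can drop acknowledged writes. The following minimal Python sketch is not part of the NetApp work; it only illustrates the usual application-level mitigation of forcing data to stable storage with fsync on both the file and its parent directory (the file name is a placeholder).

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and force it past the client/OS caches to stable storage."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush file data and metadata out of the page cache
    finally:
        os.close(fd)
    # Also sync the parent directory so the new directory entry survives a crash.
    dfd = os.open(os.path.dirname(os.path.abspath(path)) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)

# On a BeeGFS client the path would live under the mount point; a local file is used here.
durable_write("run-001.dat", b"checkpoint payload")
```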
  • Spring 2014 • Vol
Editorial

inside is published two times a year by The GAUSS Centre for Supercomputing (HLRS, LRZ, JSC).

Publishers: Prof. Dr. A. Bode | Prof. Dr. Dr. Th. Lippert | Prof. Dr.-Ing. Dr. h.c. Dr. h.c. M. M. Resch
Editor: F. Rainer Klank, HLRS, [email protected]
Design: Linda Weinmann, HLRS, [email protected]; Sebastian Zeeden, HLRS, [email protected]; Pia Heusel, HLRS, [email protected]

Welcome to this new issue of inside, the journal on Innovative Supercomputing in Germany published by the Gauss Centre for Supercomputing (GCS). In this issue, there is a clear focus on applications. While the race for ever faster systems is accelerated by the challenge to build an Exascale system within a reasonable power budget, the increased sustained performance allows for the solution of ever more complex problems. The range of applications presented is impressive, covering a variety of different fields. GCS has been a strong player in PRACE over the last years. In 2015 it will continue to provide access to leading edge systems for all European researchers. [...] of the German Research Society (DFG) start to create interesting results, as is shown in our project section.

With respect to hardware, GCS is continuously following its roadmap. New systems will become available at HLRS in 2014 and at LRZ in 2015. System shipment for HLRS is planned to start on August 8, with a chance to start general operation as early as October 2014. We will report on further progress.

As usual, this issue includes information about events in supercomputing in Germany over the last months and gives an outlook of workshops in the field.
  • Slurm Overview and Elasticsearch Plugin
Slurm Workload Manager Overview, SC15
Alejandro Sanchez, [email protected]
Copyright 2015 SchedMD LLC, http://www.schedmd.com

Slurm Workload Manager Overview
● Originally intended as simple resource manager, but has evolved into sophisticated batch scheduler
● Able to satisfy scheduling requirements for major computer centers with use of optional plugins
● No single point of failure, backup daemons, fault-tolerant job options
● Highly scalable (3.1M core Tianhe-2 at NUDT)
● Highly portable (autoconf, extensive plugins for various environments)
● Open source (GPL v2)
● Operating on many of the world's largest computers
● About 500,000 lines of code today (plus test suite and documentation)

Enterprise Architecture (diagram)

Architecture
● Kernel with core functions plus about 100 plugins to support various architectures and features
● Easily configured using building-block approach
● Easy to enhance for new architectures or features, typically just by adding new plugins

Elasticsearch Plugin (diagram)

Scheduling Capabilities
● Fair-share scheduling with hierarchical bank accounts
● Preemptive and gang scheduling (time-slicing parallel jobs)
● Integrated with database for accounting and configuration
● Resource allocations optimized for topology
● Advanced resource reservations
● Manages resources across an enterprise
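The slides describe Slurm from the scheduler's point of view; as a complement, here is a minimal, hedged sketch of how a user-side script might hand a batch job to Slurm through sbatch. The partition name, resource numbers, and application binary are placeholders, and the sketch assumes the standard sbatch/srun commands are on the PATH.

```python
import subprocess
import tempfile

# A minimal batch script; partition, limits, and the executable are placeholders.
job_script = """#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=standard
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24
#SBATCH --time=00:30:00
srun ./my_mpi_app
"""

def submit(script: str) -> str:
    """Write the script to a temp file, submit it with sbatch, return sbatch's reply."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        path = f.name
    out = subprocess.run(["sbatch", path], capture_output=True, text=True, check=True)
    return out.stdout.strip()  # e.g. "Submitted batch job 123456"

if __name__ == "__main__":
    print(submit(job_script))
```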
  • Data Integration and Analysis – 10 Distributed Data Storage
SCIENCE PASSION TECHNOLOGY

Data Integration and Analysis – 10 Distributed Data Storage
Matthias Boehm, Graz University of Technology, Austria
Computer Science and Biomedical Engineering, Institute of Interactive Systems and Data Science, BMK endowed chair for Data Management
Last update: Dec 11, 2020
706.520 Data Integration and Large-Scale Analysis – 10 Distributed Data Storage, Matthias Boehm, Graz University of Technology, WS 2020/21

Announcements/Org
• #1 Video Recording: link in TeachCenter & TUbe (lectures will be public); optional attendance (independent of COVID)
• #2 COVID-19 Restrictions (HS i5): Corona traffic light RED → ORANGE (Dec 07); Webex lectures re-evaluated over holidays
• #3 Projects and Exercises: 34x SystemDS projects, 11x exercise projects; final kickoff discussions last week
• #4 Course Evaluation: evaluation period Dec 15 – Jan 31; exam date TBD, but likely end of January

Course Outline, Part B: Large-Scale Data Management and Analysis
• 08 Cloud Computing Fundamentals [Nov 27]
• 09 Cloud Resource Management and Scheduling [Dec 04]
• 10 Distributed Data Storage [Dec 11]
• 11 Distributed Data-Parallel Computation [Jan 08]
• 12 Distributed Stream Processing [Jan 15]
• 13 Distributed Machine Learning Systems [Jan 22]

Agenda
• Motivation and Terminology
• Object Stores and Distributed File Systems
• Key-Value Stores and Cloud DBMS

Motivation and Terminology
Overview Distributed Data Storage ...
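As a small illustration of the "Object Stores" and "Key-Value Stores" agenda items (not taken from the lecture slides), the sketch below shows consistent hashing, a common way such systems place keys on storage nodes so that adding or removing a node only remaps a small fraction of keys. The node names and the example key are invented.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map keys to storage nodes; adding/removing a node only remaps nearby keys."""

    def __init__(self, nodes, vnodes=64):
        self._ring = []  # sorted list of (hash, node), using virtual nodes for balance
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["store-01", "store-02", "store-03"])
print(ring.node_for("sensor/2020-12-11/frame-42"))
```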
  • GlobalFS: A Strongly Consistent Multi-Site File System
GlobalFS: A Strongly Consistent Multi-Site File System
Leandro Pacheco (University of Lugano), Raluca Halalai (University of Neuchâtel), Valerio Schiavoni (University of Neuchâtel), Fernando Pedone (University of Lugano), Etienne Rivière (University of Neuchâtel), Pascal Felber (University of Neuchâtel)

Abstract. This paper introduces GlobalFS, a POSIX-compliant geographically distributed file system. GlobalFS builds on two fundamental building blocks, an atomic multicast group communication abstraction and multiple instances of a single-site data store. We define four execution modes and show how all file system operations can be implemented with these modes while ensuring strong consistency and tolerating failures. We describe the GlobalFS prototype in detail and report on an extensive performance assessment. We have deployed GlobalFS across all EC2 regions and show that the system scales geographically, providing performance comparable to other state-of-the-art distributed file systems for local commands and allowing for strongly consistent operations over the whole system.

[...] consistency, availability, and tolerance to partitions. Our goal is to ensure strongly consistent file system operations despite node failures, at the price of possibly reduced availability in the event of a network partition. Weak consistency is suitable for domain-specific applications where programmers can anticipate and provide resolution methods for conflicts, or work with last-writer-wins resolution methods. Our rationale is that for general-purpose services such as a file system, strong consistency is more appropriate as it is both more intuitive for the users and does not require human intervention in case of conflicts. Strong consistency requires ordering commands across replicas, which needs coordination among nodes at geographically distributed sites (i.e., regions). Designing strongly consistent distributed systems that provide good [...]
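The abstract's central point, that strong consistency follows from applying commands in the same order at every replica, can be illustrated with a toy sketch. This is not GlobalFS's actual protocol; it simply assumes an atomic multicast has already delivered one totally ordered command stream to every replica and shows that the replicas then converge to identical state.

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    """A file-system metadata replica that applies commands in delivery order."""
    name: str
    files: dict = field(default_factory=dict)  # path -> contents

    def apply(self, cmd):
        op, path, payload = cmd
        if op == "create":
            self.files.setdefault(path, "")
        elif op == "write":
            self.files[path] = payload
        elif op == "delete":
            self.files.pop(path, None)

# Pretend an atomic multicast delivered the same totally ordered command stream
# to every replica (that ordering guarantee is what the real system provides).
ordered_commands = [
    ("create", "/a.txt", None),
    ("write",  "/a.txt", "v1"),
    ("write",  "/a.txt", "v2"),
]

replicas = [Replica("eu-west"), Replica("us-east"), Replica("ap-south")]
for cmd in ordered_commands:
    for r in replicas:
        r.apply(cmd)

assert all(r.files == replicas[0].files for r in replicas)  # identical state everywhere
```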
  • Towards Transparent Data Access with Context Awareness
Computer Science 19(2) 2018, https://doi.org/10.7494/csci.2018.19.2.2844
Michał Wrzeszcz, Renata G. Słota, Jacek Kitowski
TOWARDS TRANSPARENT DATA ACCESS WITH CONTEXT AWARENESS

Abstract. Open-data research is an important factor accelerating the production and analysis of scientific results as well as worldwide collaboration; still, very little data is being shared at scale. The aim of this article is to analyze existing data-access solutions along with their usage limitations. After analyzing the existing solutions and data-access stakeholder needs, the authors propose their own vision of a data-access model.

Keywords: distributed, transparent data access, distributed data storage, context awareness
Citation: Computer Science 19(2) 2018: 201–221

1. Introduction
Data access and management can be analyzed on several levels; from personal data to globally distributed shared data with different problems to be overcome by data access and management tools. The simplest case is access to local data; i.e., data stored on direct attached storage (DAS), where the focus is on providing the device drivers and solutions that use the hardware optimally. The next level is to provide data access for a group of users working for a single organization; e.g., provision of network attached storage (NAS). The problems encountered in this case pertain to possible network failures, higher latencies [36], or the simultaneous work of many users influencing the quality of service [48, 59]. The balance of use of available storage systems to maintain QoS and cost effectiveness in an organization while request scheduling and resource allocation is also expected [28, 56].
  • The Leading Parallel Cluster File System, Developed with a Strong Focus on Performance and Designed for Very Easy Installation and Management
The Leading Parallel Cluster File System
www.thinkparq.com | www.beegfs.io

About BeeGFS

What is BeeGFS
BeeGFS (formerly FhGFS) is the leading parallel cluster file system, developed with a strong focus on performance and designed for very easy installation and management. If I/O intensive workloads are your problem, BeeGFS is the solution.

System Architecture (diagram)

Why use BeeGFS
BeeGFS transparently spreads user data across multiple servers. By increasing the number of servers and disks in the system, you can simply scale performance and capacity of the file system to the level that you need, seamlessly from small clusters up to enterprise-class systems with thousands of nodes.

Get The Most Out Of Your Data
The flexibility, robustness, and outstanding performance of BeeGFS help our customers around the globe to increase productivity by delivering results faster and by enabling new data analysis methods that were not possible without the advantages of BeeGFS.

Key Aspects

Maximum Scalability
BeeGFS offers maximum performance and scalability on various levels. It supports distributed file contents with flexible striping across storage servers on a per-file or per-directory basis as well as distributed metadata.

Maximum Flexibility
BeeGFS supports a wide range of Linux distributions such as RHEL/Fedora, SLES/OpenSuse or Debian/Ubuntu as well as a wide range of Linux kernels from ancient 2.6.18 up to the [...]
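To make the striping idea above concrete, here is a small sketch (not BeeGFS code) that maps a byte range of one file onto storage targets, assuming a fixed chunk size and simple round-robin placement; the target names and chunk size are illustrative only.

```python
def chunk_placement(offset, length, targets, chunk_size=512 * 1024):
    """Map a byte range of one file onto storage targets, round-robin per chunk."""
    placements = []
    pos = offset
    end = offset + length
    while pos < end:
        chunk_index = pos // chunk_size
        chunk_end = min((chunk_index + 1) * chunk_size, end)
        target = targets[chunk_index % len(targets)]  # round-robin striping
        placements.append((target, pos, chunk_end - pos))
        pos = chunk_end
    return placements

# A 1.2 MiB write starting at offset 0, striped over three hypothetical targets.
for target, off, size in chunk_placement(0, 1_258_291, ["stor01", "stor02", "stor03"]):
    print(f"{target}: offset={off} bytes={size}")
```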
  • Maximizing the Performance of Scientific Data Transfer by Optimizing the Interface Between Parallel File Systems and Advanced Research Networks
Maximizing the Performance of Scientific Data Transfer by Optimizing the Interface Between Parallel File Systems and Advanced Research Networks
Nicholas Mills (a), F. Alex Feltus (b), Walter B. Ligon III (a)
(a) Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC
(b) Department of Genetics and Biochemistry, Clemson University, Clemson, SC

Abstract. The large amount of time spent transferring experimental data in fields such as genomics is hampering the ability of scientists to generate new knowledge. Often, computer hardware is capable of faster transfers but sub-optimal transfer software and configurations are limiting performance. This work seeks to serve as a guide to identifying the optimal configuration for performing genomics data transfers. A wide variety of tests narrow in on the optimal data transfer parameters for parallel data streaming across Internet2 and between two CloudLab clusters loading real genomics data onto a parallel file system. The best throughput was found to occur with a configuration using GridFTP with at least 5 parallel TCP streams with a 16 MiB TCP socket buffer size to transfer to/from 4–8 BeeGFS parallel file system nodes connected by InfiniBand.

Keywords: Software Defined Networking, High Throughput Computing, DNA sequence, Parallel Data Transfer, Parallel File System, Data Intensive Science

1. Introduction
Solving scientific problems on a high-performance computing (HPC) cluster will happen faster by taking full advantage of specialized infrastructure such as parallel file systems and advanced software-defined networks. [...] of over 2,500 human genomes from 26 populations have been determined (The 1000 Genomes Project [3]); genome sequences have been produced for 3,000 rice varieties from 89 countries (The 3000 Rice Genomes Project [4]). These raw datasets are but a few examples that aggregate into [...]
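A back-of-the-envelope calculation (not from the paper) helps explain why the socket buffer size and the number of parallel streams matter: a single TCP stream's throughput is roughly bounded by window size divided by round-trip time, so the aggregate ceiling grows with both knobs. The 50 ms RTT below is an assumed, illustrative value.

```python
def tcp_throughput_ceiling_gbps(buffer_bytes: int, rtt_seconds: float, streams: int = 1) -> float:
    """Rough per-connection ceiling: window / RTT, summed over parallel streams."""
    bits_per_second = (buffer_bytes * 8 / rtt_seconds) * streams
    return bits_per_second / 1e9

buffer = 16 * 1024 * 1024  # 16 MiB socket buffer, as in the study
rtt = 0.050                # assumed 50 ms round-trip time (illustrative only)

print(f"1 stream : {tcp_throughput_ceiling_gbps(buffer, rtt):.1f} Gbit/s")
print(f"5 streams: {tcp_throughput_ceiling_gbps(buffer, rtt, streams=5):.1f} Gbit/s")
```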
  • Understanding and Finding Crash-Consistency Bugs in Parallel File Systems Jinghan Sun, Chen Wang, Jian Huang, and Marc Snir University of Illinois at Urbana-Champaign
Understanding and Finding Crash-Consistency Bugs in Parallel File Systems
Jinghan Sun, Chen Wang, Jian Huang, and Marc Snir
University of Illinois at Urbana-Champaign

Abstract. Parallel file systems (PFSes) and parallel I/O libraries have been the backbone of high-performance computing (HPC) infrastructures for decades. However, their crash consistency bugs have not been extensively studied, and the corresponding bug-finding or testing tools are lacking. In this paper, we first conduct a thorough bug study on the popular PFSes, such as BeeGFS and OrangeFS, with a cross-stack approach that covers HPC I/O library, PFS, and interactions with local file systems. The study results drive our design of a scalable testing framework, named PFSCHECK. PFSCHECK is easy to use with low performance overhead, as it can automatically generate test cases for triggering potential crash-consistency bugs, [...]

[...] directly applied to PFSes, due to the unique architecture of PFS, its workload characteristics, and the increased complexity of the HPC I/O stack. Specifically, files can be shared by all processes in a parallel job, and I/O is performed through a variety of parallel I/O libraries which may have their own crash consistency implementation. A proper attribution of application-level bugs to one layer or another in the I/O stack depends on the "contract" between each layer. To further understand crash consistency bugs in parallel I/O, we conduct a study with popular PFSes, including BeeGFS and OrangeFS. We manually create seven typical I/O workloads with the parallel I/O library HDF5 [5], and investigate crash-consistency bugs across the I/O stack.
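The idea of automatically generating crash test cases can be sketched in a few lines: log a workload's operations, treat every prefix as a possible post-crash state, and check an invariant in each. This toy example is not PFSCHECK itself (it ignores write reordering and the parallel I/O stack entirely); the workload and invariant are invented to show one detectable ordering bug.

```python
# A logged workload; each entry is (operation, path, data). "fsync" marks a persistence point.
# The commit record is (wrongly) written before the data it refers to is synced.
workload = [
    ("write", "log",  "commit"),
    ("write", "data", "v2"),
    ("fsync", "data", None),
]

def replay(prefix):
    """Apply a prefix of the workload to an empty state (a possible post-crash image)."""
    state = {}
    for op, path, data in prefix:
        if op == "write":
            state[path] = data
    return state

def invariant(state):
    # Rule under test: if the log says "commit", the new data version must be present.
    return state.get("log") != "commit" or state.get("data") == "v2"

# A crash can expose any prefix of the sequence (reordering is ignored for simplicity).
for n in range(len(workload) + 1):
    state = replay(workload[:n])
    print(n, state, "OK" if invariant(state) else "BUG")
```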
  • Using On-Demand File Systems in HPC Environments
Using On-Demand File Systems in HPC Environments
Mehmet Soysal, Marco Berghoff, Thorsten Zirwes, Marc-André Vef, Sebastian Oeste, André Brinkmann, Wolfgang E. Nagel, and Achim Streit
Steinbuch Centre for Computing (SCC) / Scientific Computing and Simulation (SCS), KIT – The Research University in the Helmholtz Association, www.kit.edu

Overview
• Motivation • Approach • Related Work • Use Cases and Results • Remarks and Observations • Conclusion & Future Work

Motivation
• The I/O subsystem (parallel FS) is a bottleneck in HPC systems: bandwidth, metadata, or latency
• Shared medium: applications interfere with each other
• New storage technologies (SSD, NVMe, NVRAM)

Proposed Solution
• Bring data closer to compute nodes: an on-demand file system (ODFS) on node-local storage, tailored as a private ODFS
• Advantages: dedicated bandwidth/IOPS, independent of the global file system, low latency due to SSD/NVMe/NVRAM, no code changes needed to the application

HPC: current file system usage (diagram)
• App/Job 1 on Nodes 1–2 and App/Job 2 on Nodes 3–6, all served by the global parallel file system ($HOME / $WORK / scratch)
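A minimal sketch (not from the talk) of the stage-in/stage-out pattern that such node-local, on-demand file systems enable is shown below; the global and node-local paths are placeholders.

```python
import shutil
from pathlib import Path

GLOBAL_FS = Path("/gpfs/work/project/input")   # placeholder global parallel FS path
NODE_LOCAL = Path("/localscratch/job_42")      # placeholder node-local ODFS mount

def stage_in(files):
    """Copy input data from the global file system onto fast node-local storage."""
    NODE_LOCAL.mkdir(parents=True, exist_ok=True)
    return [shutil.copy2(GLOBAL_FS / f, NODE_LOCAL / f) for f in files]

def stage_out(results, dest=Path("/gpfs/work/project/output")):
    """Copy results back to the global file system before the job (and ODFS) ends."""
    dest.mkdir(parents=True, exist_ok=True)
    for r in results:
        shutil.copy2(NODE_LOCAL / r, dest / r)

# Typical job flow (paths above must exist on a real system):
# stage_in(["mesh.h5", "params.yaml"])
# ... run the unchanged application with its working directory set to NODE_LOCAL ...
# stage_out(["solution.h5"])
```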
  • Performance and Scalability of a Sensor Data Storage Framework
Aalto University, School of Science, Degree Programme in Computer Science and Engineering

Paul Tötterman: Performance and Scalability of a Sensor Data Storage Framework
Master's Thesis, Helsinki, March 10, 2015, 67 pages
Major: Software Technology (Code: T-110)
Supervisor: Associate Professor Keijo Heljanko
Advisor: Lasse Rasinen M.Sc. (Tech.)

Abstract. Modern artificial intelligence and machine learning applications build on analysis and training using large datasets. New research and development does not always start with existing big datasets, but accumulates data over time. The same storage solution does not necessarily cover the scale during the lifetime of the research, especially if scaling up from using common workgroup storage technologies. The storage infrastructure at ZenRobotics has grown using standard workgroup technologies. The current approach is starting to show its limits, while the storage growth is predicted to continue and accelerate. Successful capacity planning and expansion requires a better understanding of the patterns of the use of storage and its growth. We have examined the current storage architecture and stored data from different perspectives in order to gain a better understanding of the situation. By performing a number of experiments we determine key properties of the employed technologies. The combination of these factors allows us to make informed decisions about future storage solutions.
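The capacity-planning concern in the abstract boils down to simple compounding arithmetic; the following sketch (with invented numbers, not data from the thesis) projects storage consumption under an assumed yearly growth rate.

```python
def project_capacity(current_tb: float, yearly_growth: float, years: int):
    """Project storage consumption assuming compound yearly growth."""
    usage = current_tb
    for year in range(1, years + 1):
        usage *= 1 + yearly_growth
        yield year, usage

# Assume 40 TB today growing 60% per year (purely illustrative numbers).
for year, tb in project_capacity(40.0, 0.60, 5):
    print(f"year {year}: {tb:7.1f} TB")
```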
  • Intel® SSF Reference Design: Intel® Xeon Phi™ Processor, Intel® OPA
Intel® Scalable System Framework (Intel® SSF) Reference Design
Cluster installation for systems based on the Intel® Xeon® processor E5-2600 v4 family including Intel® Ethernet. Based on OpenHPC v1.1. Version 1.0

Summary
This Reference Design is part of the Intel® Scalable System Framework series of reference collateral. The Reference Design is a verified implementation example of a given reference architecture, complete with hardware and software Bill of Materials information and cluster configuration instructions. It can confidently be used "as is", or be the foundation for enhancements and/or modifications. Additional Reference Designs are expected in the future to provide example solutions for existing reference architecture definitions and for utilizing additional Intel® SSF elements. Similarly, more Reference Designs are expected as new reference architecture definitions are introduced. This Reference Design is developed in support of the specification listed below using certain Intel® SSF elements:
• Intel® Scalable System Framework Architecture Specification
• Servers with Intel® Xeon® processor E5-2600 v4 family processors
• Intel® Ethernet
• Software stack based on OpenHPC v1.1

Legal Notices
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade. This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice.