HPC Job Scheduling: Co-Scheduling in HPC Clusters


October 26, 2018
Nikolaos Triantafyllis, PhD student
School of Electrical and Computer Engineering - N.T.U.A.

Categories of Schedulers

The majority of schedulers can be categorized into:
• Operating Systems Process Schedulers
• Cluster Systems Job Schedulers
• Big Data Schedulers

Operating Systems Process Schedulers

• During scheduling events, an algorithm has to assign CPU time to tasks
• Focus on responsiveness and low overhead
• Most notable process schedulers:
◦ Cooperative Scheduling (CS)
◦ Multi-Level Feedback Queue (MLFQ)
◦ O(n) Scheduler
◦ O(1) Scheduler
◦ Completely Fair Scheduler (CFS)
◦ Brain F Scheduler (BFS)
• CFS: each process should get an equal share of CPU time. Current Linux kernels use CFS
• BFS: improves interactivity but lowers performance. Proposed in 2009

Cluster Systems Job Schedulers

• During scheduling events, an algorithm has to assign nodes to jobs
• Focus on scalability and high throughput
• Most notable job schedulers:
◦ Simple Linux Utility for Resource Management (SLURM)
◦ Maui Cluster Scheduler (Maui)
◦ Moab High-Performance Computing Suite (Moab)
◦ Univa Grid Engine (UGE)
◦ LoadLeveler (LL)
◦ Load Sharing Facility (LSF)
◦ Portable Batch System (PBS) [OpenPBS, TORQUE, PBS Pro]
◦ Globus toolkit
◦ GridWay
◦ HTCondor
◦ Mesos
◦ Open MPI
◦ TORQUE
◦ Borg and Omega

Big Data Schedulers

• During scheduling events, an algorithm has to assign nodes to jobs
• Jobs involve the storage and processing of large and complex data sets
• Support specialized frameworks and only a very limited set of problems
• Most notable big data schedulers:
◦ Dryad
◦ MapReduce
◦ Hadoop
◦ HaLoop
◦ Spark
◦ CIEL

“Big Data: New York Stock Exchange produces about 1 TB of new trade data per day.”

Cluster Structure

Figure 1: Typical resource management system

Job Submission Example in SLURM

#!/bin/bash
# Example with 48 MPI tasks and 24 tasks per node.
#
# Project/Account (use your own)
#SBATCH -A hpc2n-1234-56
#
# Number of MPI tasks
#SBATCH -n 48
#
# Number of tasks per node
#SBATCH --tasks-per-node=24
#
# Runtime of this job is less than 12 hours.
#SBATCH --time=12:00:00

module load openmpi/gcc

srun ./mpi_program

# End of submit file
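A submit file like the one above is normally handed to SLURM from a login node with the standard command-line tools. The following is a minimal sketch of that workflow; the script name submit.sh and the job ID 123456 are illustrative and not taken from the slides:

# Submit the job script; sbatch prints the assigned job ID,
# e.g. "Submitted batch job 123456"
sbatch submit.sh

# Check the state of the job (PENDING, RUNNING, ...) in the queue
squeue -j 123456

# Show accounting information once the job has finished
sacct -j 123456

# Cancel the job if it is no longer needed
scancel 123456

sbatch reads the #SBATCH directives at submission time, and srun inside the script launches the MPI ranks on the nodes that the scheduler allocated.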
HPC Resource Management

Figure 2: Typical resource management system

• Substitute an external scheduler (e.g. Maui) for the internal scheduler to enhance functionality

Cluster Systems Job Schedulers: Brief Description

Simple Linux Utility for Resource Management (SLURM)
• Initially developed by Lawrence Livermore National Laboratory (LLNL)
• Open source
• Supported only on the Linux kernel
• Runs on 6 of the top 10 systems, including the number 1 system, Sunway TaihuLight, with 10,649,600 computing cores
• Used by about 50% of the world's supercomputers

Maui Cluster Scheduler (Maui)
• Developed by Adaptive Computing, Inc.
• Open source

Moab High-Performance Computing Suite (Moab)
• Developed by Adaptive Computing, Inc.
• Proprietary
• Successor of the Maui framework
• Additional features
• Used by 40% of the top 10, top 25 and top 100 systems on the Top500 list

Univa Grid Engine (UGE)
• Developed by Oracle
• Proprietary since 2010
• Also known as Oracle Grid Engine
• Supports job checkpointing (a snapshot of the current application state)

LoadLeveler (LL)
• Developed by IBM
• Proprietary
• Supports job checkpointing and gang scheduling

Load Sharing Facility (LSF)
• Developed by IBM
• Proprietary
• Supports priority escalation (job priority increases at every time interval)

Portable Batch System (PBS)
• Developed by NASA
• 3 versions:
◦ OpenPBS - open source, suitable for small clusters
◦ TORQUE - proprietary, fork of OpenPBS (Adaptive Computing, Inc.)
◦ PBS Professional - commercial version

Globus toolkit
• Developed by the Globus Alliance
• Open source
• Set of tools for constructing a computing grid
• Communicates with a local resource manager (e.g. PBS, UGE, LL)

GridWay
• Developed by researchers at the University of Madrid
• Open source
• Built on top of the Globus Toolkit framework
• Contains a module that detects slowdown (application performance monitoring) and requests the job's migration

HTCondor
• Developed by the University of Wisconsin-Madison
• Open source
• Known as Condor before 2012
• A number of tools and frameworks have been built on top of HTCondor, e.g. DAGMan (Directed Acyclic Graph Manager), where applications are nodes and dependencies are edges

Mesos
• Developed by the Apache Software Foundation
• Open source
• The Mesos master allocates resources
• Fault tolerant (uses the ZooKeeper framework to elect a new master)

Open MPI
• Developed by a consortium of partners
• Open source
• Job scheduling on a slot or node basis in Round-Robin (RR) order

TORQUE - Terascale Open-source Resource and QUEue Manager
• Developed by Adaptive Computing
• Fork of OpenPBS
• Proprietary since June 2018
• Supports external job schedulers

Borg and Omega
• Developed by Google
• Proprietary
• Borg is a resource manager that offers resources to scheduler instances
• The Omega project deploys schedulers that work in parallel and share the state of resources

Typical resource management system

Figure 3: SLURM resource management system

Typical resource management system

Figure 4: Multi-cluster Environment
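For comparison with the SLURM submit file shown earlier, roughly the same request could be written for a PBS/TORQUE-style scheduler as below. This is only a sketch assuming a typical TORQUE installation, with the account name, module and program reused as placeholders from the SLURM example:

#!/bin/bash
# Request 2 nodes with 24 processes per node (48 MPI tasks in total)
#PBS -l nodes=2:ppn=24
# Wall-clock time limit of 12 hours
#PBS -l walltime=12:00:00
# Project/Account (use your own)
#PBS -A hpc2n-1234-56

# PBS starts the job in the home directory; change to the submission directory
cd $PBS_O_WORKDIR

module load openmpi/gcc

mpirun ./mpi_program

The script would be submitted with qsub instead of sbatch.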
Job Taxonomy regarding Flexibility

Five types of jobs can be distinguished from the global perspective of the job scheduler:
1. Rigid Jobs - require a fixed set of resources (fixed static resource allocation)
2. Moldable Jobs - allow a variable set of resources, which must however be allocated before the job starts (variable static resource allocation)
3. Malleable Jobs - allow a variable set of resources which the scheduler dynamically (de)allocates; the scheduler must inform the running job so it can adapt to the new resource allocation (scheduler-initiated and app-executed, variable dynamic resource allocation)
4. Evolving Jobs - the reverse of malleable (app-initiated and scheduler-executed, variable dynamic resource allocation)
5. Adaptive Jobs - a combination of malleable and evolving characteristics (app- or scheduler-initiated and scheduler- or app-executed, respectively, variable dynamic resource allocation)

Example of Rigid Job

Figure 5: A space/time diagram, where the Y-axis represents compute nodes and the X-axis represents time, illustrating a rigid job

Example of Moldable Job

Figure 6: A space/time diagram, where the Y-axis represents compute nodes and the X-axis represents time, illustrating a moldable job

Example of Malleable Job

Figure 7: A space/time diagram, where the Y-axis represents compute nodes and the X-axis represents time, illustrating a malleable job

Example of Evolving Job

Figure 8: A space/time diagram, where the Y-axis represents compute nodes and the X-axis represents time, illustrating an evolving job

Example of Adaptive Job

Figure 9: A space/time diagram, where the Y-axis represents compute nodes and the X-axis represents time, illustrating an adaptive job

HPC & Cloud Application Types

The majority of HPC workloads alternate between phases during their life cycle.

Figure 10: Types of HPC & Cloud applications based on system resources (CPU, I/O, network, memory)

Scheduling Algorithms

• Scheduling algorithms can be broadly divided into two classes: time-sharing and space-sharing
• Time-sharing algorithms divide time on a processor into several discrete intervals, or slots, which are then assigned to unique jobs
• Space-sharing algorithms give the requested resources to a single job until the job completes execution. Most cluster schedulers operate in space-sharing mode

Time-sharing Algorithms

• Local scheduling - threads are placed in a global run queue and executed on the available processors in an RR strategy
• Gang scheduling - threads run simultaneously on different processors, so that if two or more of them communicate with each other, they are all ready to communicate at the same time. If they were not gang-scheduled, one could wait to send or receive a message to another while it is sleeping, and vice versa

Space-sharing Algorithms

• First Come First Served (FCFS)
• Round Robin (RR)
• Shortest Job First (SJF) / Longest Job First (LJF) (job execution time is provided by the user - doubtful)
• Smallest Job First (SJF) / Largest Job First (LJF) (job size is provided by the user)
• Advanced Reservation - the availability of a set of resources is guaranteed at a particular time
• Backfilling - an optimization allowing shorter jobs to execute while a long job at the head of the queue is waiting for a free processor
• Preemptive Backfilling - backfilling with QoS, where higher-priority jobs preempt lower-priority ones
• Fair-Share - equal distribution, while a site administrator can set system utilization targets for users, groups and QoS levels (RR strategy at each level of abstraction)

(A SLURM-flavoured sketch of Advanced Reservation and Backfilling is given at the end of this section.)

HPC Cluster Hierarchy

Figure 11: Hierarchical structure of a typical HPC system

HPC Co-Allocation

Figure 12: Allocation on node level vs allocation on core level

Dedicated Resource Allocation

• Most parallel scientific applications have frequent communication phases. Any imbalance in computation times results in waiting times, heavily reducing scalability. The easiest way to ensure balanced performance is to assign dedicated resources at the node level

Co-Scheduling

• Definition:
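As a concrete illustration of the Advanced Reservation and Backfilling policies from the Space-sharing Algorithms slide, the sketch below shows how they are commonly expressed on a SLURM-managed cluster; the reservation name, node names, user name and times are assumptions chosen for illustration only:

# Backfilling: selected by the site administrator in slurm.conf, e.g.
#   SchedulerType=sched/backfill
# so that short jobs can run in the gaps while a large job waits at the
# head of the queue.

# Advanced Reservation: guarantee four nodes to user alice for two hours,
# starting one hour from now
scontrol create reservation ReservationName=demo_res \
    StartTime=now+1hour Duration=02:00:00 \
    Nodes=node[01-04] Users=alice

# Jobs that should run inside the reservation reference it at submission time
sbatch --reservation=demo_res submit.sh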