A Mathematical Programming- and Simulation-Based Framework to Evaluate Cyberinfrastructure Design Choices


Zhengchun Liu∗, Rajkumar Kettimuthu∗, Sven Leyffer∗, Prashant Palkar†, and Ian Foster∗‡

∗ Mathematics and Computer Science Division, Argonne National Laboratory, 9700 Cass Ave., Lemont, IL 60439, USA
{zhengchun.liu, kettimut, leyffer, foster}@anl.gov
† Industrial Engineering and Operations Research, Indian Institute of Technology Bombay, Mumbai, MH, India 400076
[email protected]
‡ Department of Computer Science, University of Chicago, Chicago, IL 60637, USA

Abstract—Modern scientific experimental facilities such as x-ray light sources increasingly require on-demand access to large-scale computing for data analysis, for example to detect experimental errors or to select the next experiment. As the number of such facilities, the number of instruments at each facility, and the scale of computational demands all grow, the question arises as to how to meet these demands most efficiently and cost-effectively. A single computer per instrument is unlikely to be cost-effective because of low utilization and high operating costs. A single national compute facility, on the other hand, introduces a single point of failure and perhaps excessive communication costs. We introduce here methods for evaluating these and other potential design points, such as per-facility computer systems and a distributed multisite “superfacility.” We use the U.S. Department of Energy light sources as a use case and build a mixed-integer programming model and a customizable superfacility simulator to enable joint optimization of design choices and associated operational decisions. The methodology and tools provide new insights into design choices for on-demand computing facilities for real-time analysis of scientific experiment data. The simulator can also be used to support facility operations, for example by simulating the impact of events such as outages.

I. INTRODUCTION

New experimental modalities can generate extremely large quantities of data. The ability to make effective use of these new capabilities depends on the linking of data generation with computation, for example to filter out important results, identify experimental misconfigurations, or select the next experiment. X-ray light sources—crucial tools for addressing grand challenge problems in the life sciences, energy, climate change, and information technology [1–3]—illustrate these challenges well. A typical light source facility hosts dozens of beamlines, each with multiple instruments. Scientists who run experiments at beamlines want to collect the maximum information possible about the sample under study in as little time as possible. The ability to analyze data produced by detectors in near-real time can enable optimized data collection schemes, on-the-fly adjustments to experimental parameters, early detection of and response to errors (saving both beam time and scientist time), and ultimately improved productivity [4–10].

At experimental science facilities such as Argonne’s Advanced Photon Source (APS), data collected at beamline experiments are typically processed either locally on workstations or at a central facility cluster—portions of which may be dedicated to specific beamlines. Inevitably, demand often outstrips supply, slowing analysis and/or discouraging the use of advanced computational methods. In this study, our goal is to ensure that compute resources are available on-demand to provide rapid turnaround time, which is crucial to ensure that data collected are useful and usable. With current state-of-the-art detectors, many beamlines already struggle to derive useful feedback rapidly with the locally available dedicated compute resources. Data rates are projected to increase considerably at many facilities in the next few years. Therefore, the ability to use high-performance computing (HPC) will be critical for these facilities.

Real-time analysis is challenging because of the vast amounts of generated data that must be analyzed in short amounts of time. Detector technology is improving at rates far greater than Moore’s law, and modern detectors can generate data at multiple gigabytes per second. The computational power required to extract useful information from these streaming datasets in real time almost always exceeds the resources available at a beamline. Thus the use of remote computing facilities for analysis is no longer a luxury but a necessity.
The use of remote computing facilities for real-time analysis requires effective and reliable methods for on-demand acquisition of compute and network resources, efficient data streaming from beamline to compute facility, and analysis at data generation rates so that timely decisions can be taken. These tasks are complicated by the potential for competing load from different experiments. Researchers have performed a variety of preliminary studies on these problems, including methods for online analysis of data from microtomography [4, 5], high-energy diffraction microscopy [6, 7], diffuse scattering [8], continuous-motion-scan ptychography [9], and the linking of ultrafast three-dimensional x-ray imaging with continuum simulation [10]. These studies illustrate that users of light source facilities can understand their samples sufficiently during beam-time experiments to adjust the experiment and increase scientific output—if enough CPU and network resources can be marshaled on demand. But it is far from clear how one should design a cyberinfrastructure that would allow these science workflows to marshal enough CPU and network resources on demand.

In this work, we develop a framework to evaluate design choices for such a cyberinfrastructure. The framework consists of a mathematical model, an optimization engine, and a discrete event simulator. It allows users to identify optimal design choices for a variety of objective functions and constraints (using mathematical optimization on a smaller problem space) and to evaluate how well those choices will work in practice (using simulation on a larger problem space). The objective function can combine constraints (e.g., maximum processing delay, minimum resource utilization) with scientific output (e.g., timeliness of data processing), system reliability, and system cost. The framework can thus be used to explore interesting tradeoffs.

II. MOTIVATION

Various groups have demonstrated the connection of experimental facilities to remote supercomputers over high-speed networks such as ESnet in order to process experimental data in near-real time [5, 7, 8, 11]. These demonstrations typically involved special arrangements with compute facility and network operators to obtain the compute and network resources on demand. Such special arrangements are neither efficient nor scalable. Moreover, the batch queue systems at supercomputers cannot meet the requirements of these experimental science workflows, which often need compute resources within minutes or even seconds of data becoming available.

New approaches are needed to provide the compute resources required by experimental facilities. For example, we may build resources dedicated to a class of experimental science workflows (e.g., light source workflows) and schedule experiments to maximize their utilization. Or we may add resources to the existing supercomputers, change scheduling …

III. THE FRAMEWORK

When designing a cyberinfrastructure that caters to the on-demand requirements of one or more science communities or projects, several factors should be considered. One key factor is flexibility: for example, an experiment schedule that has some flexibility can be adjusted to minimize the peak demand. Our framework therefore has a demand-optimization component that uses this flexibility to shape the demand.

Finding an optimal design that meets the user requirements with the least budget would be ideal. In theory, one could use mathematical programming for this purpose, but this is an NP-complete problem. Mathematical programming also has other limitations, such as a simplified data transfer model and idealized scheduling based on a global view of all jobs. We therefore include in the framework both mathematical programming and a discrete event simulator: mathematical programming is applied to a subset of the problem, and the top few candidate solutions are then evaluated on the original (larger) problem with the simulator, which is closer to reality, to identify the design that is most promising for a real-world environment. Figure 1 shows the architecture of our framework.
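To make the simulator component concrete, the sketch below implements a deliberately stripped-down discrete event model of a single compute resource under FIFO scheduling: jobs appear when their data are created, wait until enough nodes are free, and report the slowdown metric defined in Section III-A below. This is an illustrative stand-in, not the paper's superfacility simulator, which additionally models data transfer, multiple sites and resources, and events such as outages; the `Job` fields and the FIFO policy are our own simplifying assumptions.

```python
import heapq
from dataclasses import dataclass

@dataclass
class Job:
    release: float   # time at which the job's data (and the job) appear
    run_time: float  # execution time once the job starts
    nodes: int       # number of nodes the job needs

def simulate(jobs, total_nodes):
    """Event-driven FIFO model of a single compute resource.

    Returns one slowdown per job, (finish - release) / run_time,
    the metric that the framework seeks to bound (Sec. III-A).
    """
    completions = []          # min-heap of (finish_time, nodes_freed)
    free = total_nodes
    clock = 0.0
    slowdowns = []
    for job in sorted(jobs, key=lambda j: j.release):  # strict FIFO order
        clock = max(clock, job.release)
        while free < job.nodes:       # block until enough nodes are free
            finish, n = heapq.heappop(completions)
            clock = max(clock, finish)
            free += n
        free -= job.nodes
        heapq.heappush(completions, (clock + job.run_time, job.nodes))
        slowdowns.append((clock + job.run_time - job.release) / job.run_time)
    return slowdowns

# Example: three jobs compete for a 6-node resource.
print(simulate([Job(0, 10, 4), Job(1, 5, 4), Job(2, 2, 2)], total_nodes=6))
# -> [1.0, 2.8, 5.0]
```

A design study would run such a simulation once per candidate design and workload trace and then inspect the resulting slowdown distribution.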
A. Mathematical problem description

We are interested in formulating a mixed-integer programming (MIP) model that allows us to schedule jobs from scientific facilities on HPC resources, taking into account the network capacities and the resource capacities while ensuring that the slowdown for a given percentile of the jobs is below a certain threshold. The slowdown of a job is defined as the ratio of the time between the job's creation and its completion to the job's run time.

1) Definition of Sets: We use calligraphic letters to denote the sets in our mathematical model.

Sites, S, is the set of scientific facilities/sites that create compute jobs.
Resources, R, is the set of HPC resources where the jobs are processed.
JobIds, I, is a set of unique job identifiers (we use integers).
Jobs, J, is the set of jobs, where J ⊆ I × S.
Time, T, is the set of (discretized) time intervals …
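For reference, the slowdown requirement can be written compactly in this notation. The superscripted times and the symbols σ (slowdown threshold) and p (fraction of jobs) are our own shorthand, introduced for illustration; the extract ends before the paper's full notation:

```latex
\mathcal{J} \subseteq \mathcal{I} \times \mathcal{S},
\qquad
\operatorname{slowdown}(j) \;=\;
  \frac{t_j^{\mathrm{finish}} - t_j^{\mathrm{created}}}{t_j^{\mathrm{run}}},
\qquad
\bigl|\{\, j \in \mathcal{J} \;:\; \operatorname{slowdown}(j) \le \sigma \,\}\bigr|
  \;\ge\; p\,\lvert \mathcal{J} \rvert .
```

That is, at least a fraction p of all jobs must finish within σ times their run time.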
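The extract stops before the model's variables and constraints are given, but a minimal time-indexed scheduling MIP over these sets might look as follows in Python, using the PuLP modeling library. Everything here is a toy under invented assumptions: a single resource, made-up job data, and a total-completion-time objective in place of the paper's percentile slowdown bound and network-capacity constraints.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

# Toy data: job -> (run time in slots, nodes needed); one 8-node resource;
# a horizon of 6 discretized time slots. All numbers are invented.
jobs = {"j1": (2, 4), "j2": (1, 6), "j3": (3, 2)}
T = range(6)
capacity = 8

prob = LpProblem("toy_schedule", LpMinimize)

# x[j][t] == 1 iff job j starts in slot t (time-indexed formulation).
x = {j: [LpVariable(f"x_{j}_{t}", cat=LpBinary) for t in T] for j in jobs}

# Each job starts exactly once, early enough to finish within the horizon.
for j, (run, _) in jobs.items():
    prob += lpSum(x[j][t] for t in T if t + run <= len(T)) == 1
    prob += lpSum(x[j][t] for t in T if t + run > len(T)) == 0

# Node capacity in every slot: a job started at s occupies slots s..s+run-1.
for t in T:
    prob += lpSum(nodes * x[j][s]
                  for j, (run, nodes) in jobs.items()
                  for s in T if s <= t < s + run) <= capacity

# Stand-in objective: total completion time. (The paper instead bounds a
# percentile of job slowdowns and also models network capacities.)
prob += lpSum((t + jobs[j][0]) * x[j][t] for j in jobs for t in T)

prob.solve()
for j in jobs:
    start = next(t for t in T if value(x[j][t]) > 0.5)
    print(j, "starts at slot", start)
```

A full formulation would additionally index the start variables by resource in R and by originating site in S, and couple them to data-transfer variables on the network links.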