
A mathematical programming- and simulation-based framework to evaluate cyberinfrastructure design choices

Zhengchun Liu∗, Rajkumar Kettimuthu∗, Sven Leyffer∗, Prashant Palkar†, and Ian Foster∗‡
∗Mathematics and Computer Science Division, Argonne National Laboratory, 9700 Cass Ave., Lemont, IL 60439, USA
{zhengchun.liu, kettimut, leyffer, [email protected]}
†Industrial Engineering and Operations Research, Indian Institute of Technology Bombay, Mumbai, MH, India 400076
[email protected]
‡Department of Computer Science, University of Chicago, Chicago, IL 60637, USA

Abstract—Modern scientific experimental facilities such as x-ray light sources increasingly require on-demand access to large-scale computing for data analysis, for example to detect experimental errors or to select the next experiment. As the number of such facilities, the number of instruments at each facility, and the scale of computational demands all grow, the question arises as to how to meet these demands most efficiently and cost-effectively. A single computer per instrument is unlikely to be cost-effective because of low utilization and high operating costs. A single national compute facility, on the other hand, introduces a single point of failure and perhaps excessive communication costs. We introduce here methods for evaluating these and other potential design points, such as per-facility computer systems and a distributed multisite "superfacility." We use the U.S. Department of Energy light sources as a use case and build a mixed-integer programming model and a customizable superfacility simulator to enable joint optimization of design choices and associated operational decisions. The methodology and tools provide new insights into design choices for on-demand computing facilities for real-time analysis of scientific experiment data. The simulator can also be used to support facility operations, for example by simulating the impact of events such as outages.

I. INTRODUCTION

New experimental modalities can generate extremely large quantities of data. The ability to make effective use of these new capabilities depends on the linking of data generation with computation, for example to filter out important results, identify experimental misconfigurations, or select the next experiment. X-ray light sources—crucial tools for addressing grand challenge problems in the life sciences, energy, climate change, and information technology [1–3]—illustrate these challenges well. A typical light source facility hosts dozens of beamlines, each with multiple instruments. Scientists who run experiments at beamlines want to collect the maximum information possible about the sample under study in as little time as possible. The ability to analyze data produced by detectors in near-real time can enable optimized data collection schemes, on-the-fly adjustments to experimental parameters, early detection of and response to errors (saving both beam time and scientist time), and ultimately improved productivity [4–10].

At experimental science facilities such as Argonne's Advanced Photon Source (APS), data collected at beamline experiments are typically processed either locally on workstations or at a central facility cluster—portions of which may be dedicated to specific beamlines. Inevitably, demand often outstrips supply, slowing analysis and/or discouraging the use of advanced computational methods. In this study, our goal is to ensure that compute resources are available on demand to provide rapid turnaround time, which is crucial to ensure that the data collected are useful and usable. With current state-of-the-art detectors, many beamlines already struggle to derive useful feedback rapidly with the locally available dedicated compute resources. Data rates are projected to increase considerably at many facilities in the next few years. Therefore, the ability to use high-performance computing (HPC) will be critical for these facilities.

Real-time analysis is challenging because of the vast amounts of generated data that must be analyzed in short amounts of time. Detector technology is improving at rates far greater than Moore's law, and modern detectors can generate data at multiple gigabytes per second. The computational power required to extract useful information from these streaming datasets in real time almost always exceeds the resources available at a beamline. Thus the use of remote computing facilities for analysis is no longer a luxury but a necessity.

The use of remote computing facilities for real-time analysis requires effective and reliable methods for on-demand acquisition of compute and network resources, efficient data streaming from beamline to compute facility, and analysis at data generation rates so that timely decisions can be taken. These tasks are complicated by the potential for competing load from different experiments. Researchers have performed a variety of preliminary studies on these problems, including methods for online analysis of data from microtomography [4, 5], high energy diffraction microscopy [6, 7], diffuse scattering [8], continuous motion scan ptychography [9], and linking of ultrafast three-dimensional x-ray imaging and continuum simulation [10]. These studies illustrate that users of light source facilities can understand their samples sufficiently during beam time experiments to adjust the experiment in order to increase scientific output—if enough CPU and network resources can be marshaled on demand. But it is far from clear how one should design a cyberinfrastructure that would allow these science workflows to marshal enough CPU and network resources on demand.

In this work, we develop a framework to evaluate design choices for such a cyberinfrastructure. Our framework consists of a mathematical model, an optimization engine, and a discrete event simulator. It allows users to identify optimal design choices for a variety of objective functions and constraints (using mathematical optimization for a smaller problem space) and to evaluate how well they will work in practice (using simulation for a larger problem space). The objective function could be a combination of constraints (e.g., maximum processing delay, minimum resource utilization) and scientific output (e.g., in terms of timeliness of data processing), system reliability, and system cost. Our framework can be used to explore interesting tradeoffs.

II. MOTIVATION

Various groups have demonstrated the connection of experimental facilities with remote supercomputers using high-speed network connections such as ESnet to process data from experimental facilities in near-real time [5, 7, 8, 11]. These demonstrations typically involved special arrangements with compute facility and network operators to obtain the compute and network resources on demand. But such special arrangements are not efficient or scalable. Moreover, the batch queue systems at supercomputers cannot meet the requirements of these experimental science workflows, which often require compute resources within minutes or even seconds of data becoming available.

New approaches are needed to provide the compute resources required by experimental facilities. For example, we may build resources dedicated to a class of experimental science workflows (e.g., light source workflows) and schedule experiments to maximize their utilization. Or, we may add resources to the existing supercomputers, change scheduling

III. THE FRAMEWORK

When designing a cyberinfrastructure that caters to the on-demand requirements of one or more science communities or projects, several factors should be considered. One key factor is flexibility. For example, an experiment schedule that has some flexibility can be adjusted to minimize the peak demand. Thus, our framework has a demand optimization component that exploits this flexibility to optimize the demand profile.

Finding an optimal design that meets the user requirements with the least budget would be ideal. In principle, one could use mathematical programming for this purpose, but the resulting problem is NP-hard. Mathematical programming also has other restrictions, such as a simplified data transfer model and ideal scheduling based on a global view of jobs. Thus, we include in our framework both mathematical programming and a discrete event simulator, so that mathematical programming can be applied to a subset of the problem and the top few candidate solutions can then be evaluated on the original (larger) problem by using simulation, which is closer to reality, to identify the design that is most promising for a real-world environment. Figure 1 shows the architecture of our framework.
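To make this optimize-then-simulate approach concrete, the sketch below shows one way a MIP-based shortlist could be handed to a simulator for final selection. It is only a minimal illustration under assumptions, not the framework's actual implementation: the function evaluate_designs, the stand-in scores, and the way designs are represented are all hypothetical.

# Illustrative two-stage evaluation: shortlist designs with a (cheap, reduced)
# MIP objective, then evaluate only the shortlist with the (more realistic)
# discrete event simulator. All names and numbers below are placeholders.
from typing import Callable, Dict, List

def evaluate_designs(designs: List[str],
                     mip_objective: Callable[[str], float],
                     simulate: Callable[[str], float],
                     top_k: int = 3) -> str:
    """Return the design that performs best in simulation among the MIP's top candidates."""
    shortlist = sorted(designs, key=mip_objective)[:top_k]             # stage 1: reduced problem
    simulated: Dict[str, float] = {d: simulate(d) for d in shortlist}  # stage 2: full problem
    return min(simulated, key=simulated.get)                           # e.g., lowest mean slowdown

# Toy usage with placeholder scores standing in for a real MIP solve and simulation run.
designs = ["per-facility clusters", "single national facility", "distributed superfacility"]
mip_cost = {"per-facility clusters": 3.0, "single national facility": 2.0, "distributed superfacility": 1.5}
sim_slowdown = {"per-facility clusters": 1.8, "single national facility": 2.5, "distributed superfacility": 1.4}
best = evaluate_designs(designs, lambda d: mip_cost[d], lambda d: sim_slowdown[d], top_k=2)
print("most promising design:", best)

Running the expensive but more faithful simulator only on the MIP's shortlist keeps the overall search tractable while still checking candidates against effects (such as realistic data transfer and scheduling) that the MIP abstracts away.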
A. Mathematical problem description

We are interested in formulating a mixed-integer programming (MIP) model that allows us to schedule jobs from scientific facilities at HPC resources, taking into account the network capacities and the resource capacities while ensuring that the slowdown for a given percentile of the jobs is below a certain threshold. The slowdown of a job is defined as the ratio of the time between the start and finish of the job to the run time of the job.
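In symbols (our notation, chosen here for illustration and not necessarily the paper's), this definition and the percentile requirement read:

\[
  \mathrm{slowdown}(j) \;=\; \frac{f_j - s_j}{p_j},
  \qquad
  \bigl|\{\, j \in \mathcal{J} : \mathrm{slowdown}(j) \le \delta \,\}\bigr| \;\ge\; q \, |\mathcal{J}|,
\]

where s_j and f_j are the start and finish times of job j, p_j is its run time, δ is the slowdown threshold, and q is the required fraction (percentile) of jobs that must meet it.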
1) Definition of Sets: We use calligraphic letters to describe the sets in our mathematical model.

Sites, S, is the set of scientific facilities/sites that create compute jobs.
Resources, R, is the set of HPC resources where the jobs are processed.
JobIds, I, is a set of unique job identifiers (we use integers).
Jobs, J, is the set of jobs, where J ⊆ I × S.
Time, T, is the set of (discretized) time intervals.
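As a concrete illustration of how these sets might be instantiated for such a scheduling model, the short Python sketch below builds toy versions of S, R, I, J ⊆ I × S, and a discretized T. The facility and resource names, the number of jobs, and the 15-minute interval length are placeholders chosen here for illustration; they are not values from the paper.

# Toy instantiation of the sets defined above (all names and sizes are placeholders).
from itertools import product

sites = {"APS", "ALS", "LCLS"}                # S: facilities/sites that create compute jobs
resources = {"ALCF", "NERSC", "OLCF"}         # R: HPC resources that process the jobs
job_ids = set(range(1, 6))                    # I: unique integer job identifiers
jobs = set(product(job_ids, sites))           # J ⊆ I × S: here every (job id, site) pair, purely for illustration
time_intervals = list(range(0, 24 * 60, 15))  # T: a 24-hour horizon in 15-minute slots

print(len(jobs), "jobs over", len(time_intervals), "time intervals")

In a MIP built over these sets, decision variables would typically be indexed by such tuples, for example a binary variable indicating whether job (i, s) runs on resource r during time interval t.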