AMP: A Science-driven Web-based Application for the TeraGrid

Matthew Woitaszek, Travis Metcalfe, and Ian Shorrock
National Center for Atmospheric Research
1850 Table Mesa Drive, Boulder, CO 80305
[email protected]  [email protected]  [email protected]

arXiv:1011.6332v1 [astro-ph.IM] 29 Nov 2010
To appear in the Proceedings of the 5th Grid Computing Environments Workshop (GCE2009), Portland, Oregon, USA, 2009.

ABSTRACT
The Asteroseismic Modeling Portal (AMP) provides a web-based interface for astronomers to run and view simulations that derive the properties of Sun-like stars from observations of their pulsation frequencies. In this paper, we describe the architecture and implementation of AMP, highlighting the lightweight design principles and tools used to produce a functional, fully-custom web-based science application in less than a year. Targeted as a TeraGrid science gateway, AMP's architecture and implementation are intended to simplify its orchestration of TeraGrid computational resources. AMP's web-based interface was developed as a traditional standalone database-backed web application using the Python-based Django web development framework, allowing us to leverage the Django framework's capabilities while cleanly separating the user interface development from the grid interface development. We have found this combination of tools flexible and effective for rapid gateway development and deployment.

Categories and Subject Descriptors
H.3.5 [Information Storage and Retrieval]: Online Information Services - Web-based services.

1. INTRODUCTION
In March 2009, NASA launched the Kepler satellite as part of a mission to identify potentially habitable Earth-like planets. Kepler detects planets by observing extrasolar transits, brief dips in observed brightness as a planet passes between its star and the satellite, that can be used to identify the size of the planet relative to the size of the star. However, in order to calculate the absolute size of an extrasolar planet, the size of the star must also be known. Asteroseismology can be used to determine the properties of Sun-like stars from observations of their pulsation frequencies, yielding the precise absolute size of a distant star and thus the absolute size of any detected extrasolar planets. The Asteroseismic Modeling Portal (AMP, http://amp.ucar.edu) presents a web-based interface to the MPIKAIA asteroseismology pipeline [6] for a broad international community of researchers, facilitating automated model execution and simplifying data sharing among research groups.

While the MPIKAIA asteroseismology pipeline itself has been available for astronomers to download and run on their own resources for several years, its potential use for processing Kepler data provided compelling motivation to explore presenting the model as a science gateway. The most substantial barriers to an astronomer running the model on a local resource are MPIKAIA's high computational requirements and its straightforward but high-maintenance workflow. Running a single MPIKAIA simulation requires propagating several independent batches of MPI jobs and can consume 512 processors for over a week of wall-clock time. More importantly, the results of these asteroseismology simulations are of interest to an international community of researchers. Presenting the model via a science gateway allows researchers without local resources to run the model, disseminates model results to the community without repetition, and produces a uniform analysis of asteroseismic data for many stars of interest.

The straightforward workflow implemented by AMP also provided an opportunity to develop a new science gateway while exploring a new architecture, web application framework, and supporting technologies. One of the first steps when designing a science gateway is to select the collection of technologies, such as frameworks and toolkits, that will be used to construct the gateway. As noted by M. Thomas when similarly evaluating frameworks for science gateway development, gateways can be constructed using tools that vary greatly in complexity and features, with the most feature-rich frameworks often introducing substantial development complexity [12]. Indeed, many of the prior science gateway projects at the National Center for Atmospheric Research (NCAR) followed the design pattern typical of many gateways by using Java to implement complex and highly-extensible service-oriented architectures and web portals. Most notably divergent from our prior work [5], AMP does not use an application-specific service-oriented architecture and is not written in Java.

For the design and implementation of AMP, our objective was to create a web-based science-driven application that peripherally used Grid technologies to enable the back-end use of supercomputing resources. We prioritized minimizing development time and complexity while retaining full creative control of the user interface by selecting the Django rapid-development web framework and implementing the Grid functionality with command-line toolkit interfaces.
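To make this division of labor concrete, the following short Python sketch shows the general pattern of wrapping a command-line Grid client in a thin function that the web application or workflow system can call. It is illustrative only: this section does not name the specific client tools AMP invokes, so the command and its arguments below are placeholders rather than AMP's actual toolkit invocation.

    # Illustrative only: "globus-job-submit" and its arguments are placeholders
    # for whichever command-line Grid client the gateway actually invokes.
    import logging
    import subprocess

    log = logging.getLogger("gateway.grid")

    def submit_remote_job(resource_contact: str, script_path: str) -> str:
        """Submit a batch job to a remote resource by shelling out to a
        command-line client and return the job identifier it prints."""
        cmd = ["globus-job-submit", resource_contact, script_path]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        job_id = result.stdout.strip()
        log.info("submitted %s to %s as %s", script_path, resource_contact, job_id)
        return job_id

Because the web framework only ever sees an ordinary function call, the Grid tooling behind it can be changed or upgraded without touching the user interface code.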
Due to AMP's computational requirements, AMP has been designed since its inception to target TeraGrid resources. Many of the best practices and procedures for developing and deploying science gateways on the TeraGrid were proposed coincident with our initial exploration of targeting TeraGrid as AMP's computational platform. As such, AMP also provides an example of constructing a new science gateway specifically for TeraGrid cyberinfrastructure rather than the common case of extending an existing gateway to utilize TeraGrid. AMP's architecture separates the web-based user interface from the workflow system performing Grid operations, isolating interactive users both logically and physically from TeraGrid operations. We utilized only components common to all TeraGrid resource providers with the goal of facilitating easy deployment on current TeraGrid-managed resources without any resource provider assistance.

The remainder of this paper is organized as follows. Section 2 describes the asteroseismology model workflow and computational requirements. Sections 3 and 4 describe the architecture, design, and implementation of AMP. Section 5 discusses our experiences with AMP's implementation, emphasizing the potential usefulness of the design principles for future gateway projects, and the paper concludes with continuing and future work.

2. BACKGROUND
The asteroseismology workflow provided by AMP consists of two components: a forward stellar model and a genetic algorithm (GA) that invokes the forward model as a subroutine. The forward stellar model is the Aarhus Stellar Evolution Code (ASTEC) [4], a single-processor code that takes as input five floating-point physical parameters (mass, metallicity, helium mass fraction, convective efficiency, and age) and constructs a model of the star's evolution through the specified age. The output of the model includes observable data such as the star's temperature, luminosity, and pulsation frequencies. In addition to the scalar parameter output,

AMP supports both modes of execution from its web-based user interface: running the forward model with specific model parameters (a "direct model run"), and executing the GA to identify model parameters that produce observed data (an "optimization run"). Direct model runs are trivial to configure and execute: they require five floating-point parameters as input, take 10-15 minutes to execute on a single processor, and produce a few kilobytes of output. Optimization runs are both more complex and computationally intensive.
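Before turning to the optimization workflow, the direct-run case gives a concrete picture of the database-backed approach. The sketch below shows how a Django application could record each direct model run as a single model class whose fields mirror the five input parameters; the class and field names are assumptions for illustration, not AMP's actual schema.

    # Hypothetical sketch of a Django model for a direct model run; the names
    # are illustrative and are not taken from AMP's actual database schema.
    from django.db import models

    class DirectModelRun(models.Model):
        """One forward-model (ASTEC) execution requested through the web interface."""
        # The five floating-point input parameters described in this section.
        mass = models.FloatField()
        metallicity = models.FloatField()
        helium_mass_fraction = models.FloatField()
        convective_efficiency = models.FloatField()
        age = models.FloatField()

        # Bookkeeping consulted by the workflow system that runs the job.
        status = models.CharField(max_length=16, default="queued")
        submitted = models.DateTimeField(auto_now_add=True)

A separate workflow process can then poll for queued records and update their status as jobs progress, keeping Grid-specific logic out of the web tier.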
The optimization run workflow consists of an ensemble of independent GA runs, with each run requiring the execution of multiple sequential tasks (see Figure 1). For each optimization run, multiple separate GAs are executed and allowed to converge independently. Each GA (and indeed each task) is started with randomly generated seed parameters to encourage the GA to explore a wide parameter space, avoid local minima, and provide confidence in the optimality of the final result. The GAs can take from hours to days to converge depending on system performance and the number of iterations requested, so a GA may not converge in a single task execution within the target supercomputer's walltime limitations. Thus, each GA run may require several invocations of the executable to converge to a solution. When all of the GA runs in the ensemble are complete, the best solution is evaluated using the forward model to produce detailed output for presentation and analysis.
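The control flow just described (independent GA runs with random seeds, each resubmitted until it converges within a single task's walltime, followed by a detailed forward-model evaluation of the best solution) can be summarized in a short Python sketch. The functions and the convergence test below are hypothetical stand-ins rather than AMP's actual workflow code.

    # Illustrative sketch of the optimization-run pattern: an ensemble of
    # independent GA runs, each resubmitted until it converges, after which the
    # best solution would be evaluated with the forward model. All functions
    # here are hypothetical stand-ins for AMP's batch-job machinery.
    import random

    def submit_ga_task(run_index: int, state: dict) -> dict:
        """Placeholder for staging restart files, submitting one GA task as a
        batch job, and waiting for it; here we only simulate progress."""
        state = dict(state)
        state["iterations"] += random.randint(30, 80)
        state["converged"] = state["iterations"] >= 200  # e.g., 200 iterations requested
        return state

    def run_ga_until_converged(run_index: int, max_invocations: int = 10) -> dict:
        """Resubmit one GA run until it converges or the budget is exhausted."""
        state = {"seed": random.randrange(2**31), "iterations": 0, "converged": False}
        for _ in range(max_invocations):
            state = submit_ga_task(run_index, state)
            if state["converged"]:
                break
        return state

    def optimization_run(n_ga_runs: int = 4) -> list:
        """Run the ensemble; the best result would then receive a detailed
        forward-model run for presentation and analysis."""
        return [run_ga_until_converged(i) for i in range(n_ga_runs)]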
In the current configuration for the Kepler data analysis, each optimization run consists of four GA runs executed in parallel, and each GA models a population of 126 stars (using 128 processors) for 200 iterations. One interesting artifact of the ASTEC model is that the execution time varies slightly depending on the target star's characteristics. During the first few iterations, some stars in the randomly chosen population may take more time to model than others. Because each iteration is blocked on the completion of all stars in the population, the iteration run time is set by the longest-running component star. However, as the model continues and the population begins to converge, the model run time for each star also converges and the time to run each iteration decreases. Thus, the