Proceedings of the 14th IASTED International Conference
PARALLEL AND DISTRIBUTED COMPUTING AND SYSTEMS
November 4-6, 2002, Cambridge, USA

Climateprediction.net: Design Principles for Public-Resource Modeling Research

David Stainforth1, Jamie Kettleborough2, Andrew Martin3, Andrew Simpson3, Richard Gillis3, Ali Akkas2, Richard Gault3, Mat Collins5, David Gavaghan3 and Myles Allen1

1Atmospheric, Oceanic and Planetary Physics, University of Oxford, UK; 2Rutherford Appleton Laboratory, Oxfordshire, UK; 3Oxford University Computing Laboratory, UK; 5Dept. of Meteorology, Reading University, UK.

ABSTRACT

Climateprediction.net aims to harness the spare CPU cycles of a million individual users' PCs to run a massive ensemble of climate simulations using an up-to-date, full-resolution, three-dimensional atmosphere-ocean climate model. The project has many similarities with other public-resource high-throughput activities but is distinctive in a number of ways, including the complexity of the computational task, its duration and system demands, participant interaction, and data volume and analysis procedures. These distinctive aspects, together with the documented experience of other Internet-based distributed computing projects, lead to a number of design challenges. The analysis of these, and the resulting architecture, is presented here. The initial development and testing phase is complete and roll-out will begin shortly. It is expected that the issues raised will be of relevance to an increasing number of similar projects.

KEY WORDS
Grid Computing, Modeling and Simulation, Internet Computing, Parallel and Distributed Systems.

1. Introduction

The overall objectives of the climateprediction.net project ([1], [2], [3]) are:

• To harness the power of idle home and business PCs to provide the first fully probabilistic 50-year forecast of human-induced climate change, based on a perturbed-physics ensemble simulation with a full-scale three-dimensional atmosphere-ocean general circulation model.
• To enhance public understanding of climate modeling and the nature of uncertainty in climate prediction, enabling secondary- and tertiary-level students to perform their own climate modeling research projects, both as individuals and in research teams.
• To provide a platform for the development and demonstration of peer-to-peer data analysis and visualization software in the context of a massively distributed data archive.
• To demonstrate the potential of distributed computing for Monte Carlo ensemble simulations of chaotic systems with advanced geophysical fluid dynamics computer codes.

The quantification of uncertainty in climate predictions requires of order 1-2 million integrations of a complex climate model. This is beyond the scope of conventional supercomputing facilities but could be achieved using the Grid concept of "high-throughput computing" [4], which has been demonstrated by a number of "public-resource" projects such as SETI@home [5]. The principle is to utilize idle processing capacity on PCs volunteered by businesses and the general public. This project differs from previous projects in a number of ways, including: the volume and management of the distributed data generated; the scale of the data collection problem; the complexity, granularity and duration of the computational task; the necessity and desire to provide tools to maintain long-term participant interest; and the need to provide a participation package simple enough to be used by inexperienced PC users, possibly with slow Internet connections.

The initial development phase is complete and beta testing is underway, with a public launch planned for early 2003. This paper describes, in section 2, the motivation for climateprediction.net. In section 3 the factors influencing the overall design are presented. It is anticipated that many of these factors will be similar for other modeling projects wishing to use the high-throughput, public-resource, public-interest computing paradigm. Consequently it is believed that the architecture of climateprediction.net, described in sections 4, 5 and 6, will serve as an initial attempt at overcoming the generic problems inherent in such computationally intensive distributed computing projects.

Issues of security and experimental integrity are major elements of the project and are intrinsic to the distributed architectural design. However, their importance and complexity are such that they will be presented in a separate future paper.
2. Motivation For Climateprediction.net

Recent studies have shown that atmosphere-ocean general circulation models (AOGCMs) are capable of simulating some large-scale features of present-day climate and recent climate change with remarkable accuracy (e.g. [6]). These models contain, however, large numbers of adjustable parameters, many of whose values are poorly constrained by the available data on the processes they are supposed to represent and are known, individually, to have a significant impact on simulated climate. The practical question is therefore: to what extent might a different choice of parameters or parameterizations result in an equally realistic simulation of 20th century climate yet a different forecast for 21st century climate change? This is acknowledged by the Inter-Governmental Panel on Climate Change (IPCC) as one of the key outstanding priorities for climate research.

Since AOGCMs contain large numbers of under-determined parameters whose impact is non-additive [7], there is no alternative to a "brute force" exploration of high-dimensional parameter space. This requires very large numbers of long integrations of a complex climate model. Exhaustive sampling of 5 values of only 9 parameters would require ~2M simulations. Intelligent sampling based on results as they are produced would reduce this figure, but the inclusion of additional parameters, and the need to include initial-condition and forcing perturbations as well, easily leads to a demand for overall ensembles of 1 to 2 million independent simulations. (See [3] for experimental details and fig. 3 of [3] for results from prototype software.)
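To make the combinatorics above concrete, the following sketch counts the runs needed for exhaustive sampling and enumerates the parameter combinations lazily. This is an illustration only, not project code; the sampled values are abstract indices rather than real model parameters.

```python
from itertools import islice, product

# Exhaustive sampling of k values for each of n parameters needs k**n runs.
values_per_parameter = 5
num_parameters = 9
ensemble_size = values_per_parameter ** num_parameters
print(f"{ensemble_size:,}")  # 1,953,125 -- the "~2M simulations" quoted above

# The combinations can be generated lazily, so simulations can be handed
# out one at a time rather than materialized as a ~2M-entry list.
combinations = product(range(values_per_parameter), repeat=num_parameters)
for combo in islice(combinations, 3):
    print(combo)  # (0,...,0,0), then (0,...,0,1), then (0,...,0,2)
```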
Currently available supercomputing resources would be completely inadequate for this task. However, the continuing increase in the computing capacity of the average PC has meant that AOGCMs, which until recently could only sensibly be run on supercomputers, can now be run on commonly available desktop machines. This opens up the possibility of carrying out the above research using high-throughput public-resource computing, as demonstrated by a number of projects such as SETI@home [5], FightAIDS@home and Compute Against Cancer [8]. The most successful have been those that stimulate the public's imagination and interest; in such situations tremendous computing capacity has been accessed at relatively low cost.

A suitable architecture for such a project requires careful design involving a grid of servers, with one or more controlling nodes. Developing such a scalable architecture is one of the specific challenges posed by a modeling initiative of this sort. These challenges are described here, focusing on issues related to the model, the architectural restrictions posed by the size and location of the datasets, the requirements of the software package, and the necessity to cater for public involvement. All initial development has focused on a package suitable for Windows operating systems. This is justified by its ubiquity: statistics from SETI@home show that Windows participants contribute more than ten times as much processing as users of other operating systems. A detailed comparison of the architecture of public-resource projects is planned for a future paper.
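As a minimal sketch of the server grid just described, a controlling node might issue self-describing work units that name a data server for the results, keeping bulk data collection off the controlling node itself. Everything here is hypothetical: the field names, server names and parameter names are invented for illustration and are not the project's actual message format.

```python
import json
import random

# Hypothetical data servers to which clients return their results.
UPLOAD_SERVERS = ["data1.example.org", "data2.example.org"]

def next_work_unit(unit_id: int, parameters: dict) -> str:
    """Build one work unit: a single point in parameter space, plus
    the server that should receive the (potentially large) output."""
    unit = {
        "unit_id": unit_id,
        "parameters": parameters,
        # Spreading uploads across data servers decouples bulk data
        # collection from work distribution.
        "upload_server": random.choice(UPLOAD_SERVERS),
    }
    return json.dumps(unit)

# Invented parameter names, standing in for perturbed model physics.
print(next_work_unit(1, {"param_a": 0.3, "param_b": 1.7}))
```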
3.1. Model Preparation

The first challenge of using high-throughput public-resource computing for a complex modeling simulation is the need to implement the model in question on personal computers running popular operating systems, in this case Windows. This will often be a non-trivial task. The codes may have been developed over a number of years and be designed to run on parallel-processor supercomputers. Furthermore, the user interface and analysis tools are often UNIX-based. These may be easy to port to an individual Windows machine by installing a UNIX shell, but this is an unacceptable solution in the public-resource distributed computing context.

Climateprediction.net is using versions of the Hadley Centre climate model (HadCM3 [9],[10]; HadSM3 [7]), the global set-up of the UK Met Office Unified Model (UM) [11]. This is one of the world's foremost climate models and, although it was designed to be run on MPP machines, a single-processor version also exists. Nevertheless, compiler limitations and oddities of the code, much of it originating as FORTRAN 77 or even FORTRAN 66, meant that several months were necessary to implement the particular version required under Linux. Running under Windows required further effort to extract the fundamental model code from the GUI set-up and the wrapper scripts. As a result, the preparation of each specific model version is now done under a Linux implementation which includes various pre-processing modules to create the desired FORTRAN code. This is then transferred to Windows and implemented within the climateprediction.net package. The main difficulties encountered in porting the code from Linux to Windows were the identification of suitable compiler options and changes in the way environment variables are used.
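The environment-variable difficulty can be made concrete with a hedged sketch: on UNIX the model's wrapper scripts export settings through the shell, whereas a Windows client has no such shell, so a launcher must resolve settings itself. The variable names below are hypothetical and are not those used by the UM.

```python
import os
import sys
from pathlib import Path

# Built-in defaults replace what UNIX wrapper scripts would export.
# These names are invented for illustration.
DEFAULTS = {
    "MODEL_DATA_DIR": str(Path.home() / "cpdn" / "data"),
    "MODEL_WORK_DIR": str(Path.home() / "cpdn" / "work"),
}

def resolve_setting(name: str) -> str:
    """Prefer an environment variable if one is set, else fall back to
    a built-in default, so one launcher runs on both Linux and Windows."""
    value = os.environ.get(name) or DEFAULTS.get(name)
    if value is None:
        sys.exit(f"required setting {name} is not configured")
    return value

work_dir = Path(resolve_setting("MODEL_WORK_DIR"))
work_dir.mkdir(parents=True, exist_ok=True)  # create per-run workspace
print(f"model workspace: {work_dir}")
```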
3.2. Architectural Design Issues

A basic description of the experiments to be undertaken in climateprediction.net