arXiv:physics/0306002v1 [physics.comp-ph] 31 May 2003

CHEP'03, La Jolla, California, 24. - 28. March, 2003    MOAT003

The NorduGrid architecture and tools

P. Eerola, B. Kónya, O. Smirnova
Particle Physics, Institute of Physics, Lund University, Box 118, 22100 Lund, Sweden

T. Ekelöf, M. Ellert
Department of Radiation Sciences, Uppsala University, Box 535, 75121 Uppsala, Sweden

J. R. Hansen, J. L. Nielsen, A. Wäänänen
Niels Bohr Institutet for Astronomi, Fysik og Geofysik, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark

A. Konstantinov
University of Oslo, Department of Physics, P. O. Box 1048, Blindern, 0316 Oslo, Norway, and Vilnius University, Institute of Material Science and Applied Research, Saulėtekio al. 9, Vilnius 2040, Lithuania

F. Ould-Saada
University of Oslo, Department of Physics, P. O. Box 1048, Blindern, 0316 Oslo, Norway

The NorduGrid project designed a Grid architecture with the primary goal to meet the requirements of production tasks of the LHC experiments. While it is meant to be a rather generic Grid system, it puts emphasis on batch processing suitable for problems encountered in High Energy Physics. The NorduGrid architecture implementation uses the Globus Toolkit™ as the foundation for various components, developed by the project. While introducing new services, the NorduGrid does not modify the Globus tools, such that the two can eventually co-exist. The NorduGrid topology is decentralized, avoiding a single point of failure. The NorduGrid architecture is thus a light-weight, non-invasive and dynamic one, while robust and scalable, capable of meeting the most challenging tasks of High Energy Physics.

1. Introduction

The European High Energy Physics community is in the final stages of construction and deployment of the Large Hadron Collider (LHC) - the world's biggest accelerator, being built at the European Particle Physics Laboratory (CERN) in Geneva. Challenges to be faced by physicists are unprecedented. Four experiments will be constructed at the LHC to observe events produced in proton-proton and heavy ion collisions. Data collected by these experiments will allow for exploration of new frontiers of the fundamental laws of nature, like the possible discovery of the Higgs boson and the Higgs mechanism; B-meson decays with CP-violation; supersymmetry; large extra dimensions; and others.

One of the greatest challenges of the LHC project will be the data acquisition and analysis. When, after a few years of operation, the accelerator will run at its design luminosity, each detector will observe bunch collisions at a rate of 4·10^7 per second. A set of filter algorithms, implemented in hardware and on state-of-art programmable processors, aims to reduce the event rate to less than 1000 events per second for final storage and analysis. The equivalent data volume is between 100 MByte/sec and 1 GByte/sec. Each experiment is expected to collect 1 PByte of raw data per year. The two general purpose experiments, ATLAS and CMS, have each more than 150 participating institutes distributed all over the world. 2000 physicists per experiment contribute to the development of hardware and software, and they expect to have almost instantaneous access to the data and to a set of up-to-date analysis tools.

1.1. The NorduGrid Project

In order to face the computing challenge of the LHC and other similar problems emerging from the science communities, the NorduGrid project, also known as the Nordic Testbed for Wide Area Computing and Data, was set up. The goal was to create a GRID-like infrastructure in the Nordic countries (Denmark, Finland, Norway and Sweden) that could be used to evaluate GRID tools by the scientists and to find out whether this new emerging technology could help in solving the massive data and computing intensive tasks ahead. The first test case was to be the ATLAS Data Challenge (see [1]), which was due to start in May of 2002.

As the focus was on deployment, we hoped that we could adopt existing solutions rather than develop any software within the project. The available middleware candidates providing Grid solutions on the market were narrowed down to the Globus Toolkit™ and the software developed by the European DataGrid project (EDG) [2][3]. The middleware from these two projects was however soon found to be inadequate. The Globus Toolkit is mainly a box of tools rather than a complete solution: there is no job brokering, and the Globus Resource Allocation Manager [4] lacked the possibility of staging large input and output data. The EDG software seemed to address the deficiencies of the Globus Toolkit, but in the beginning of 2002 it was not mature enough to seriously tackle the ATLAS data challenges. Furthermore, it was felt that the centralized resource broker of the EDG implementation was a serious bottleneck, and that it could never deal with the large amounts of data that were going to be handled on a large data Grid.

It was therefore decided that the NorduGrid project would create a Grid architecture from scratch. The implementation would use existing pieces of working Grid middleware and develop the missing pieces within the project in order to create a production Grid testbed.

2. Architecture

The NorduGrid architecture was carefully planned and designed to satisfy the needs of users and system administrators simultaneously. These needs can be outlined as a general philosophy:

• Start with simple things that work and proceed from there

• Avoid architectural single points of failure

• Should be scalable

• Resource owners retain full control of their resources

• As few site requirements as possible:

  – No dictation of cluster configuration or install method

  – No dependence on a particular operating system or version

• Reuse existing system installations as much as possible

• The NorduGrid middleware is only required on a front-end machine

• Compute nodes are not required to be on the public network

• Clusters need not be dedicated to Grid jobs

2.1. The NorduGrid System Components

The NorduGrid tools are designed to handle job submission and management, user area management, some data management, and monitoring. Figure 1 depicts basic components of the NorduGrid architecture and schematic relations between them. In what follows, detailed descriptions of the components are given.

Figure 1: Components of the NorduGrid architecture.

User Interface (UI) is the major new service developed by the NorduGrid, and is one of its key components. It has the high-level functionality missing from the Globus Toolkit™, namely, that of resource discovery, brokering, Grid job submission and job status querying. To achieve this, the UI communicates with the NorduGrid Grid Manager (see Section 3.1) and queries the Information System (Section 3.3) and the Replica Catalog (Section 3.2).

The UI comes simply as a client package (Section 3.4), which can be installed on any machine – typically, an end-user desktop computer. Therefore, NorduGrid does not need a centralized resource broker; instead, there are as many User Interfaces as users find convenient.

Information System within the NorduGrid is another key component, based on MDS [8] and realized as a distributed service (Section 3.3), which serves information for other components, such as a UI or a monitor (Section 3.6).

The Information System consists of a dynamic set of distributed databases which are coupled to computing and storage resources to provide information on the status of a specific resource. When they are queried, the requested information is generated locally on the resource (optionally cached afterward); therefore the information system follows a pull model. The local databases register themselves via a soft-state registration mechanism to a set of indexing services (registries). The registries are contacted by the UI or monitoring agents in order to find contact information of the local databases for a direct query.

Computing Cluster is the natural computing unit in the NorduGrid architecture. It consists of a front-end node which manages several back-end nodes, normally via a private closed network. The software component consists of a standard batch system plus an extra layer to interface with the Grid. This layer includes the Grid Manager (Section 3.1) and the local information service (Section 3.3).

The operating system of choice is Linux in its many flavors. Other Unix-like systems (e.g. HP-UX, Tru64 UNIX) can be utilized as well.

The NorduGrid does not dictate configuration of the batch system, trying to be an "add-on" component which hooks the local resources onto the Grid and allows Grid jobs to run along with conventional jobs, respecting local setup and configuration policies.

There are no specific requirements for the cluster setup, except that there should be a shared file system (e.g. the Network File System - NFS) between the front-end and the back-end nodes. The back-end nodes are managed entirely through the local batch system, and no NorduGrid middleware has to be installed on them.
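As an illustration of this minimal site requirement (not taken from the NorduGrid documentation; the paths and addresses are invented for the example), a front-end could simply export a scratch area to its back-end nodes over NFS:

  # /etc/exports on the cluster front-end (path and subnet are illustrative)
  /scratch/grid  192.168.1.0/24(rw,sync,no_root_squash)

  # on each back-end node, mount the same area at the same path
  mount -t nfs frontend:/scratch/grid /scratch/grid

No Grid software is involved on the back-ends; the shared area merely lets the files staged by the front-end be visible to the nodes where the jobs actually run.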
Storage Element (SE) is a concept not fully developed by the NorduGrid at this stage. So far, SEs are implemented as plain GridFTP [5] servers. The software used for this is a GridFTP server, either provided as a part of the Globus Toolkit™, or the one delivered as a part of the NorduGrid Grid Manager (see Section 3.1). The latter is preferred, since it allows access control based on the user's Grid certificate instead of the local identities to which users are mapped.


Replica Catalog (RC) is used for registering and locating data sources. NorduGrid makes use of the RC as developed by the Globus project [3], with minor changes to improve functionality (see Section 3.2). RC records are entered and used primarily by the GM and its components, and can be used by the UI for resource brokering.

2.2. Task Flow

The components described above are designed to support the task flow as follows:

1. A user prepares a job description using the extended Globus Resource Specification Language (RSL, Section 3.5). This description may include application specific requirements, such as input and output data description, as well as other options used in resource matching, such as architecture or an explicit cluster.

2. The job description is interpreted by the UI, which performs resource brokering using the Information System and RC data, and forwards the job to the chosen cluster (see Section 3.4), eventually uploading specified accompanying files.

3. The job request (in extended RSL format) is received by the GM which resides on the cluster's front-end. It handles pre-processing, job submission to the local system, and post-processing (see Section 3.1), depending on the job specifications. Input and output data manipulations are made by the GM with the help of RC's and SE's.

4. A user may request e-mail notifications about the job status, or simply use the UI or monitor (Section 3.6) to follow the job progress. Upon the job end, files specified in the job description can be retrieved by the user. If not fetched in 24 hours, they are erased by the local GM. In [1] this can be seen in a real production environment.

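Purely as an illustration of this flow from the user's point of view, a session with the command line tools described in Section 3.4 could look roughly as follows; the option syntax and the form of the returned job identifier are assumptions made for the sake of the example and may differ from the actual User Interface:

  # steps 1-2: submit a job described in an xRSL file; the UI performs
  # brokering and prints a job identifier (format assumed here)
  ngsub -f myjob.xrsl
  gsiftp://somecluster.example.org:2811/jobs/12345

  # step 4: follow the job progress
  ngstat gsiftp://somecluster.example.org:2811/jobs/12345

  # retrieve the output of the finished job; output not fetched within
  # 24 hours is erased by the local GM
  ngget gsiftp://somecluster.example.org:2811/jobs/12345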
3. The NorduGrid Middleware

The NorduGrid middleware is almost entirely based on the Globus Toolkit™ API, libraries and services. In order to support the NorduGrid architecture, several innovative approaches were used, such as the Grid Manager (Section 3.1) and the User Interface (Section 3.4). Other components were extended and developed further, such as the Information Model (Section 3.3) and the Extended Resource Specification Language (Section 3.5).

3.1. Grid Manager

The NorduGrid Grid Manager (GM) software acts as a smart front-end for job submission to a cluster. It runs on the cluster's front-end and provides job management and data pre- and post-staging functionality, with support for meta-data catalogs.

The GM is a layer above the Globus Toolkit™ libraries and services. It was written because the services available as parts of the Globus Toolkit™ did not meet the requirements of the NorduGrid architecture (Section 2) at that time.


Missing things included integrated support for the RC, sharing cached files among users, staging in/out data files, etc.

Since data operations are performed by an additional layer, the data are handled only at the beginning and end of a job. Hence the user is expected to provide complete information about the input and output data. This is the most significant limitation of the used approach.

Differently from the Globus Resource Allocation Manager (GRAM) [4], the GM uses a GridFTP interface for job submission. To do that, a NorduGrid GridFTP server (GFS) was developed, based on the Globus libraries [5]. Its main features which make it different from the Globus implementation are:

• Virtual directory tree, configured per user.

• Access control, based on the Distinguished Name stored in the user certificate.

• Local file system access, implemented through loadable plug-ins (shared libraries). There are two plug-ins provided with GFS:

  – The local file access plug-in implements an ordinary FTP server-like access,

  – The job submission plug-in provides an interface for submission and control of jobs handled by the GM.

The GFS is also used by NorduGrid to create relatively easily configurable GridFTP based storage servers (often called Storage Elements).

The GM accepts job-submission scripts described in Globus RSL (Section 3.5) with a few new attributes added.

For every job, the GM creates a separate directory (the session directory) and stores the input files into it. There is no single point (machine) that all the jobs have to pass, since the gathering of input data is done directly on a cluster front-end.

Input files local to the user interface are uploaded by the UI. For remote input files the GM starts a download process that understands a variety of protocols (e.g. http, ftp, gridftp, rc, rsl, etc.).

Once all input files are ready, the GM creates a job execution script and launches it using a Local Resource Management System (LRMS). Such a script can perform additional actions, such as setting the environment for third-party software packages requested by a job.

After a job has finished, all the specified output files are transferred to their destinations, or are temporarily stored in the session directory to allow users to retrieve them later. If an output destination specifies a RC, additional registration is performed in the replica catalog. A schematic view of the GM mechanism is shown in Figure 2.

Figure 2: Grid Manager architecture

Additionally, the GM can cache input files and let jobs and users share them. If the requested file is available in the cache, it is still checked (if the protocol allows that) against the remote server whether a user is eligible to access it. To save disk space, cached files can be provided to jobs as soft links.

The GM uses a bash-script Globus-like implementation of an interface to a local batch system. This makes it easy to adapt the GM to inter-operate with most LRMSs.

The GM uses the file system to store the state of handled jobs. This allows it to recover safely from most system faults after a restart.

The GM also includes user utilities to handle data transfer and registration in meta-data catalogs.
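As a sketch of what such a bash layer might contain (this is not the actual NorduGrid script; the file names, variables and batch system options are assumptions for a PBS-like LRMS):

  #!/bin/bash
  # Illustrative GM-to-LRMS submit wrapper, assuming a PBS-like batch system.
  # $1 is the session directory prepared by the GM for this job.
  session_dir="$1"
  job_script="$session_dir/job_script.sh"

  # hand the job over to the local batch system, keeping stdout/stderr
  # in the session directory so the GM can stage them out afterwards
  local_id=$(qsub -o "$session_dir/stdout" -e "$session_dir/stderr" "$job_script") || exit 1

  # remember the local job id so that the GM can poll the job state later
  echo "$local_id" > "$session_dir/local_job_id"

Adapting the GM to another batch system then amounts to replacing the few commands that submit, query and cancel local jobs.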
3.2. Replica Catalog

The NorduGrid makes use of the Globus RC, which is based on an OpenLDAP [6] server with the default LDAP DatabaseManager back-end. There are no significant changes introduced into the original Globus RC objects schema. Apparent OpenLDAP problems with transferring relatively big amounts of data over an authenticated/encrypted connection were fixed partially by applying appropriate patches, and partially by automatic restart of the server. Together with a fault tolerant behavior of the client part, this made the system usable.

To manage the information in the RC server, the Globus Toolkit™ API and libraries were used. The only significant change was to add the possibility to perform securely authenticated connections based on the Globus GSI mechanism.

3.3. Information System

A complete description of the NorduGrid Information System exceeds the limits imposed by this publication. The NorduGrid middleware implements a dynamic distributed information system [7] which was created by extending the Monitoring and Discovery Services [8] (MDS) of the Globus Toolkit™. The MDS is an extensible framework for creating Grid information systems, and it is built upon the OpenLDAP [6] software. An MDS-based information system consists of an information model (schema), local information providers, local databases, a soft registration mechanism and information indices. NorduGrid extensions and a specific setup made MDS a reliable backbone of the information system. A detailed account of these modifications is given below.



A Grid information model should be a result of a delicate design process of how to represent the resources and what is the best way to structure this information. In the MDS-based system used, the information is stored as attribute-value pairs of LDAP entries, which are organized into a hierarchical tree. Therefore, the information model is technically formulated via an LDAP schema and the specification of the LDAP-tree.

Evaluation of the original MDS and the EDG schemas [8, 9] showed that they are not suitable for describing clusters, Grid jobs and Grid users simultaneously. Therefore, NorduGrid designed its own information model [7]. While Globus and other Grid projects keep developing new schemas [10, 11], the NorduGrid one is the only one which has been deployed, tested and used in a production facility.

The NorduGrid model [7] naturally describes the main Grid components: computing clusters, Grid users and Grid jobs. The LDAP-tree of a cluster is shown in Fig. 3. In this model, a cluster entry describes the hardware, software and middleware properties of a cluster. Grid-enabled queues are represented by the queue entry, under which the authuser and job entries can be found. Every authorized Grid user has an entry under the queue. Similarly, every Grid job submitted to the queue is represented by a job entry. The job entries are generated on the execution cluster, this way implementing a distributed job status monitoring system. The schema also describes Storage Elements and Replica Catalogs, although in a simplistic manner.

Figure 3: The LDAP subtree corresponding to a cluster resource.

The information providers are small programs that generate LDAP entries upon a search request. The NorduGrid information model requires its own set of information providers, therefore a set of such programs which interface to the local system was created. The NorduGrid providers create and populate the NorduGrid LDAP entries of the local database by collecting information from the cluster's batch system and the Grid Manager, among others, on the status of grid jobs, grid users and the queuing system.

The local databases in the MDS-based system are responsible for implementing the first-level caching of the providers' output and answering queries by providing the requested Grid information to the clients through the LDAP protocol. Globus created an LDAP back-end for this purpose called the GRIS (Grid Resource Information Service). NorduGrid uses this back-end as its local information database. Local databases are configured to cache the output of the NorduGrid providers for a certain period.

The soft-state registration mechanism of MDS is used by the local databases to register their contact information into the registry services (indices), which themselves can further register to other registries. The soft-state registration makes the Grid dynamic, allowing resources to come and go. Utilization of the soft-state registration made it possible to create a specific topology of indexed resources.

Registries (or index services) are used to maintain dynamic lists of available resources, containing the contact information of the soft-state-registered local databases. They are also capable of performing queries by following the registrations and caching the results of the search operations (a higher-level caching mechanism). The Globus-developed back-end for the registries is called GIIS (Grid Information Index Service). The higher-level caching is not used by the NorduGrid Project, and index services work as simple dynamic "link catalogs", thus reducing the overall load on the system. In the NorduGrid system, clients contact the index services only to find out the contact information of the local databases (or lower level index services); then the resources are queried directly.
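Since the resources are queried directly, any client holding the contact information can inspect a cluster's local database with an ordinary LDAP query. A minimal sketch, assuming a typical MDS setup (the host name, port and base DN below are illustrative and depend on the actual site configuration):

  # direct query of a cluster's local information database (GRIS);
  # host, port and base DN are assumptions, not fixed NorduGrid values
  ldapsearch -x -H ldap://somecluster.example.org:2135 \
             -b 'mds-vo-name=local,o=grid' '(objectclass=*)'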



The local databases and index services of the NorduGrid testbed are organized into a multi-level tree hierarchy. It attempts to follow a natural geographical hierarchy, where resources belonging to the same country are grouped together and register to the country's index service. These indices further register to the top level NorduGrid index services. In order to avoid any single point of failure, NorduGrid operates a multi-rooted tree with several top-level indices.

3.4. User Interface and resource brokering

The user interacts with the NorduGrid through a set of command line tools. There are commands for submitting a job, for querying the status of jobs and clusters, for retrieving the data from finished jobs, for killing jobs etc. There are also tools that can be used for copying files to, from and between the Storage Elements and Replica Catalogs and for removing files from them. The full list of commands is given below:

ngsub – job submission

ngstat – show status of jobs and clusters

ngcat – display stdout or stderr of a running job

ngget – retrieve the output from a finished job

ngkill – kill a running job

ngclean – delete the output from the cluster

ngsync – recreate the user interface's local information about running jobs

ngcopy – copy files to, from and between Storage Elements and Replica Catalogs

ngremove – delete files from Storage Elements and Replica Catalogs

More detailed information about these commands can be found in the User Interface user's manual, distributed with the NorduGrid toolkit [12].

When submitting a Grid job using ngsub, the user should describe the job using extended RSL (xRSL) syntax (Section 3.5). This piece of xRSL should contain all the information needed to run the job (the name of the executable, the arguments to be used, etc.). It should also contain a set of requirements that a cluster must satisfy in order to be able to run the job. The cluster can e.g. be required to have a certain amount of free disk space available or have a particular set of software installed.

When a user submits a job using ngsub, the User Interface contacts the Information System (see Section 3.3): first to find available resources, and then to query each available cluster in order to do requirement matching. If the xRSL specification states that some input files should be downloaded from a Replica Catalog, the Replica Catalog is contacted as well, in order to obtain information about these files.

The User Interface then matches the requirements specified in the xRSL with the information obtained from the clusters in order to select a suitable queue at a suitable cluster. If a suitable cluster is found, the job is submitted to that cluster. Thus, the resource brokering functionality is an integral part of the User Interface, and does not require an additional service.
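The data management commands listed above follow the same command-line style. The following is only an illustration: the argument order (source before destination) and the URLs are assumptions, not taken from the User Interface manual:

  # copy a local file to a GridFTP-based Storage Element
  ngcopy file:///home/user/result.dat gsiftp://se.example.org/data/result.dat

  # copy the same file to a location registered in a Replica Catalog
  ngcopy gsiftp://se.example.org/data/result.dat rc://rc.example.org/coll/result.dat

  # remove the Replica Catalog entry and file when no longer needed
  ngremove rc://rc.example.org/coll/result.dat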


3.5. Resource specification language

The NorduGrid architecture uses Globus RSL 1.0 [4] as the basis for the communication language between users, the UI and the GM. The extended functionality of the NorduGrid requires certain extensions to RSL. This concerns not only the introduction of new attributes, but also the differentiation between the two levels of the job options specifications:

• User-side RSL – the set of attributes specified by a user in a job description file. This file is interpreted by the User Interface (Section 3.4), and after the necessary modifications is passed to the Grid Manager (GM, Section 3.1)

• GM-side RSL – the set of attributes pre-processed by the UI, and ready to be interpreted by the GM

This dual purpose of the RSL in the NorduGrid architecture, as well as re-defined and newly introduced attributes, prompted NorduGrid to refer to the used resource specification language as "xRSL", in order to avoid possible confusion. xRSL uses the same syntax conventions as the core Globus RSL, although it changes the meaning and interpretation of some attributes. For a detailed description, refer to the xRSL specifications distributed with the NorduGrid toolkit [12].

The most notable changes are those related to file movement. The major challenge for NorduGrid applications is pre- and post-staging of a considerable amount of files, often of a large size. To reflect this, two new attributes were introduced in xRSL: inputFiles and outputFiles. Each of them is a list of local-remote file name or URL pairs. Input files local to the submission node are uploaded to the execution node by the UI; the rest are handled by the GM. The output files are moved upon job completion by the GM to a specified location (Storage Element). If no output location is specified, the files are expected to be retrieved by a user via the UI.

Several other attributes were added in xRSL for the convenience of users. A typical xRSL file is:

&(executable="ds2000.sh")
 (arguments="1101")
 (join="yes")
 (rsl_substitution=("TASK" "dc1.002000.simul"))
 (rsl_substitution=("LNAM" "dc1.002000.simul.01101.hlt.pythia_jet_17"))
 (stdout=$(LNAM).log)
 (inputfiles=("ds2000.sh" http://www.nordugrid.org/$(TASK).NG.sh))
 (outputFiles=
   ($(LNAM).log rc://dc1.uio.no/2000/log/$(LNAM).log)
   (atlas.01101.zebra rc://dc1.uio.no/2000/zebra/$(LNAM).zebra)
   (atlas.01101.his rc://dc1.uio.no/2000/his/$(LNAM).his)
   ($(LNAM).AMI rc://dc1.uio.no/2000/ami/$(LNAM).AMI)
   ($(LNAM).MAG rc://dc1.uio.no/2000/mag/$(LNAM).MAG))
 (jobname=$(LNAM))
 (runTimeEnvironment="DC1-ATLAS")

Here the rsl_substitution attributes define macros, so that, for example, $(LNAM).log expands to dc1.002000.simul.01101.hlt.pythia_jet_17.log. A more detailed explanation can be found in [1]. Such an extended RSL appears to be sufficient for job descriptions of the desired complexity. The ease of adding new attributes is particularly appealing, and NorduGrid is committed to use xRSL in its further development.

3.6. Monitoring

The NorduGrid provides an easy-to-use monitoring tool, realized as a Web interface to the NorduGrid Information System. This Grid Monitor allows browsing through all the published information about the system, thus providing real-time monitoring and a primary debugging aid for the NorduGrid.

The structure of the Grid Monitor to a great extent follows that of the NorduGrid Information System. For each class of objects, either an essential subset of attributes, or the whole list of them, is presented in an easily accessible inter-linked manner. This is realized as a set of windows, each being associated with a corresponding module.

The screen-shot of the main Grid Monitor window, as available from the NorduGrid Web page, is shown in Fig. 4. Most of the displayed objects are linked to appropriate modules, such that with a simple mouse click, a user can launch another module window, expanding the information about the corresponding object or attribute. Each such window gives access to other modules in turn, thus providing rather intuitive browsing.

Figure 4: The Grid Monitor.

The Web server that provides the Grid Monitor access runs on a completely independent machine, therefore imposing no extra load on the NorduGrid, apart from very frequent LDAP queries (the default refresh time for a single window is 30 seconds).

3.7. Software configuration and distribution

Since the Grid is supposed to be deployed on many sites, it implies the involvement of many site administrators, not all of whom may be Grid experts. Therefore the configuration of the NorduGrid Toolkit was made as simple as possible. It basically requires writing two configuration files: globus.conf and nordugrid.conf.



The globus.conf file is used by the globus-config package, which configures the information system of the Globus Toolkit™ from a single file. This configuration scheme was developed as a joint effort of NorduGrid and EDG and thus is not NorduGrid-specific. The nordugrid.conf file is used for configuring the various NorduGrid Toolkit components.

This approach proved to be very convenient and allowed sites as remote from Scandinavia as Canada or Japan to be set up in a matter of hours, with little help from the NorduGrid developers.

The NorduGrid Toolkit is freely available via the Web site [12] as RPM distributions, source tar-balls as well as CVS snapshots and nightly builds. Furthermore, there is a stand-alone local client installation, distributed as a tar-ball and designed to be a NorduGrid entry point, working out-of-the-box. Since it contains the necessary Globus components, it can be used to interact with other Grids as well.

Acknowledgments

The NorduGrid project was funded by the Nordic Council of Ministers through the Nordunet2 programme and by NOS-N. The authors would like to express their gratitude to the system administrators across the Nordic countries for their courage, patience and assistance in enabling the NorduGrid environment. In particular, our thanks go to Ulf Mjörnmark and Björn Lundberg of Lund University, Björn Nilsson of NBI Copenhagen, Niclas Andersson and Leif Nixon of NSC Linköping, Åke Sandgren of HPC2N Umeå and Jacko Koster of Parallab, Bergen.


References

[1] P. Eerola et al., "Atlas Data Challenge 1 on NorduGrid", in Proc. of CHEP'03, PSN: MOCT011, 2003.

[2] "The DataGrid Project", http://www.edg.org.

[3] "The Globus Project", http://www.globus.org.

[4] K. Czajkowski, I. Foster, N. Karonis, C. Kesselman, S. Martin, W. Smith, S. Tuecke, "A Resource Management Architecture for Metacomputing Systems", Proc. IPPS/SPDP '98 Workshop on Job Scheduling Strategies for Parallel Processing, pg. 62-82, 1998.

[5] W. Allcock, J. Bester, J. Bresnahan, A. Chervenak, L. Liming, S. Meder, S. Tuecke, "GridFTP Protocol Specification", GGF GridFTP Working Group Document, 2002.

[6] "Open source implementation of the Lightweight Directory Access Protocol", http://www.openldap.org.

[7] B. Kónya, "The NorduGrid Information System", http://www.nordugrid.org/documents/ng-infosys.pdf.

[8] K. Czajkowski, S. Fitzgerald, I. Foster, C. Kesselman, "Grid Information Services for Distributed Resource Sharing", Proceedings of the Tenth IEEE International Symposium on High-Performance Distributed Computing (HPDC-10), IEEE Press, 2001.

[9] M. Sgaravatto, "WP1 Inputs to the DataGrid Grid Information Service Schema Specification", 01-TEN-0104-0 6, 2001.

[10] "The GLUE schema effort", http://www.hicb.org/glue/glue-schema/schema.htm, http://www.cnaf.infn.it/~sergio/datatag/glue/index.htm.

[11] "The CIM-based Grid Schema Working Group", http://www.isi.edu/~flon/cgs-wg/index.htm.

[12] "Nordic Testbed for Wide Area Computing and Data Handling", http://www.nordugrid.org.
