
CHEP 04
Monday 27 September 2004 - Friday 01 October 2004
Interlaken, Switzerland
Book of abstracts (Monday 06 September 2004)

This document and all information concerning the conference programme can be accessed online via the CHEP04 web site: http://www.chep04.org. The information on the web will be updated in the event of any last-minute changes in the programme. These will also be communicated directly at the conference. This document contains all abstracts of the talks and posters to be presented at CHEP 2004. They are ordered by abstract number. Information on when, and in which session, each presentation is due to be made is also given here for completeness. The Conference Programme should be consulted for the complete schedule of talks; it can be cross-referenced to this document using the abstract number.

30 - Scatter and Gather Data Management
Poster Session 2 - Wednesday 29 September 2004 10:00
Presenter: HANUSHEVSKY, Andrew

Distributing data across multiple servers for later access as a cohesive whole is a well-known technique for distributing load and, with replication, for providing a higher degree of fault tolerance. These techniques are widely used in peer-to-peer file-sharing networks, which have been at the forefront of data location and placement methodologies. However, many of the algorithms used in such architectures rapidly break down when servers can not only claim to have data but also promise to host data that they do not yet have. This odd combination is a natural outcome of a multi-tiered storage hierarchy (e.g., disk and tape), endemic to HEP's large-scale storage needs. This talk will focus on how the Open Load Balancing (OLB) system, developed for the BaBar experiment, deals with the ephemeral promise of future hosted data in a structured peer-to-peer network. At issue are server selection, server failure, information caching, data replication, and server duplication. The OLB treats these issues as dynamically changing conditions that must be continuously accommodated to minimize resource wastage and end-client failure. The system has been in use for several months supporting ROOT file as well as Objectivity/DB access, and has become invaluable in providing a robust data management environment for petabytes of data.

31 - Detector-independent vertex reconstruction toolkit (VERTIGO)
Poster Session 3 - Thursday 30 September 2004 10:00
Presenter: Mr. WALTENBERGER, Wolfgang

A proposal is made for the design and implementation of a detector-independent vertex reconstruction toolkit and interface to generic objects (VERTIGO). The first stage aims at re-using existing state-of-the-art algorithms for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. Prototype candidates for the latter are a wide range of adaptive filter algorithms being developed for LHC/CMS, as well as proven ones (like ZVTOP of SLC/SLD). In a second stage, kinematic constraints will also be included for the benefit of complex multi-vertex topologies. The design is based on modern object-oriented techniques. A core (RAVE) is surrounded by a shell of abstract interfaces (using adaptors for access from/to the particular environment) and a set of analysis and debugging tools. The implementation follows an open-source approach and is easily adaptable to future standards. Work has started with the development of a specialized visualisation tool, following the model-view-controller (MVC) paradigm; it is based on COIN3D and may also include interactivity via Python scripting. A persistency solution, intended to provide a general data structure, was originally based on top of ROOT and is currently being extended for AIDA and XML compliance; interfaces to existing or future event reconstruction packages are easily implementable. Flexible linking to a math library is an important requirement; at present we use CLHEP, which could be replaced by a generic product.

32 - Development of algorithms for cluster finding and track reconstruction in the forward muon spectrometer of the ALICE experiment
Poster Session 3 - Thursday 30 September 2004 10:00
Presenter: Dr. CHABRATOVA, Galina

A simultaneous track finding/fitting procedure based on the Kalman filtering approach has been developed for the forward muon spectrometer of the ALICE experiment. In order to improve the performance of the method in the high-background conditions of heavy-ion collisions, the "canonical" Kalman filter has been modified and supplemented by a "smoother" part. It is shown that the resulting "extended" Kalman filter gives better tracking results and offers higher flexibility. To further improve the tracking performance in a high-occupancy environment, a new algorithm for cluster/hit finding in the cathode pad chambers of the muon spectrometer has been developed. It is based on an expectation-maximization procedure for shape deconvolution of overlapping clusters. It is demonstrated that the proposed method reduces the loss of coordinate reconstruction accuracy at high hit multiplicities and achieves better tracking results. Both the hit finding and track reconstruction algorithms have been implemented within the AliRoot software framework.
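As a rough illustration of the expectation-maximization deconvolution described in abstract 32 (the abstract itself gives no implementation details), the following Python sketch separates two overlapping one-dimensional clusters. The fixed Gaussian pad-response width, the pad layout, and all names are illustrative assumptions, not the AliRoot code.

import numpy as np

def em_deconvolve(pad_pos, charge, n_clusters=2, sigma=0.5, n_iter=50):
    """Split overlapping charge clusters via expectation-maximization.

    pad_pos : 1-D array of pad centre coordinates
    charge  : measured charge on each pad
    sigma   : assumed (fixed) width of the pad-response function
    Returns estimated cluster positions and total charges.
    """
    # Initial guesses: spread the cluster seeds across the pad range.
    mu = np.linspace(pad_pos.min(), pad_pos.max(), n_clusters + 2)[1:-1]
    w = np.full(n_clusters, charge.sum() / n_clusters)

    for _ in range(n_iter):
        # E-step: fraction of each pad's charge attributed to each cluster,
        # under a Gaussian pad-response approximation.
        resp = w[:, None] * np.exp(-0.5 * ((pad_pos - mu[:, None]) / sigma) ** 2)
        resp /= resp.sum(axis=0) + 1e-12

        # M-step: re-estimate cluster charge and position from the
        # charge-weighted responsibilities.
        q = resp * charge
        w = q.sum(axis=1)
        mu = (q * pad_pos).sum(axis=1) / (w + 1e-12)

    return mu, w

# Example: two clusters separated by about two pad widths.
pads = np.arange(10, dtype=float)
signal = 80 * np.exp(-0.5 * ((pads - 4.2) / 0.5) ** 2) \
       + 50 * np.exp(-0.5 * ((pads - 6.1) / 0.5) ** 2)
print(em_deconvolve(pads, signal))

With each iteration the two seeds migrate toward the true cluster positions; keeping the width fixed to the known pad response is what makes the shape deconvolution well constrained.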
33 - A Condor-based, Grid-aware batch software for a large scale Linux Farm
Poster Session 2 - Wednesday 29 September 2004 10:00
Presenter: Dr. WLODEK, Tomasz

A description of a Condor-based, Grid-aware batch software system configured to function asynchronously with a mass storage system is presented. The software is currently used in a large Linux farm (2700+ processors) at the RHIC and ATLAS Tier 1 Computing Facility at Brookhaven Lab. Design, scalability, reliability, features and support issues with a complex Condor-based batch system are addressed within the context of a Grid-like, distributed computing environment.

34 - PyBus -- A Python Software Bus
Poster Session 3 - Thursday 30 September 2004 10:00
Presenter: LAVRIJSEN, Wim

A software bus, just like its hardware equivalent, allows for the discovery, installation, configuration, loading, unloading, and run-time replacement of software components, as well as the channeling of inter-component communication. Python, a popular open-source programming language, encourages modular design in software written in it, but it offers little or no component functionality. However, the language and its interpreter provide sufficient hooks to implement a thin, integral layer of component support. This functionality can be presented to the developer in the form of a module, making it very easy to use. This paper describes a Python module, PyBus, with which the concept of a 'software bus' can be realised in Python. It demonstrates, within the context of the ATLAS software framework Athena, how PyBus can be used for the installation and (run-time) configuration of software, not necessarily Python modules, from a Python application in a way that is transparent to the end-user.
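As a rough illustration of the software-bus idea described above, the following Python sketch shows how run-time loading, replacement, and inter-component messaging might be exposed through a single module. The class and method names are invented for illustration; they are not the PyBus API.

import importlib

class SoftwareBus:
    """Minimal software-bus sketch: components are discovered and loaded
    at run time, and can be replaced without restarting the application."""

    def __init__(self):
        self._components = {}
        self._subscribers = {}

    def load(self, name, module_path):
        # Import the module that provides the component, then reload it
        # so that a re-load picks up run-time changes to the source.
        module = importlib.import_module(module_path)
        module = importlib.reload(module)
        self._components[name] = module
        return module

    def unload(self, name):
        self._components.pop(name, None)

    def get(self, name):
        return self._components[name]

    # Simple publish/subscribe channel for inter-component communication.
    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers.get(topic, []):
            callback(message)

# Usage: load a standard-library module as a stand-in "component",
# then exchange a message over a named channel.
bus = SoftwareBus()
bus.load("json_codec", "json")
print(bus.get("json_codec").dumps({"event": 42}))
bus.subscribe("config", lambda msg: print("reconfigured:", msg))
bus.publish("config", {"level": "DEBUG"})

Keeping components behind a string-keyed registry is what allows a running application to swap an implementation (via unload/load) without its clients holding stale references.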
38 - Experience producing simulated events for the DZero experiment on the SAM-Grid
Distributed Computing Systems and Experiences - Wednesday 29 September 2004 15:20
Presenter: KENNEDY, Rob

Most of the simulated events for the DZero experiment at Fermilab have historically been produced by the "remote" collaborating institutions. One of the principal challenges reported concerns the maintenance of the local software infrastructure, which generally differs from site to site. As the community's understanding of distributed computing over distributively owned and shared resources progresses, the adoption of grid technologies for the production of Monte Carlo events for high-energy physics experiments becomes increasingly interesting. The SAM-Grid is a software system developed at Fermilab which integrates standard grid technologies for job and information management with SAM, the data handling system of the DZero and CDF experiments. During the past few months, this grid system has been tailored for the Monte Carlo production of DZero. Since the initial phase of deployment, this experience has exposed an interesting series of requirements on the SAM-Grid services, the standard middleware, the resources and their management, and the analysis framework of the experiment. As of today, the inefficiency due to the grid infrastructure has been reduced to as little as 1%. In this paper, we present our statistics and the "lessons learned" in running large high-energy physics applications on a grid infrastructure.

43 - A distributed, Grid-based analysis system for the MAGIC telescope
Distributed Computing Systems and Experiences - Monday 27 September 2004 15:40
Presenter: Dr. KORNMAYER, Harald

The observation of high-energy gamma rays with ground-based air Cherenkov telescopes is one of the most exciting areas in modern astroparticle physics. At the end of 2003 the MAGIC telescope started operation. The low energy threshold for gamma rays, together with the different background sources, leads to a considerable amount of data. The analysis will be done in different institutes spread over Europe. The production of Monte Carlo events, including the simulation of Cherenkov light in the atmosphere, is very computing-intensive and is another challenge for a collaboration like MAGIC. Therefore the MAGIC telescope collaboration will take the opportunity to use Grid technology to set up a distributed computational and data-intensive analysis system.