Grid Data Management Systems & Services
Recommended publications
Grid Computing: What Is It, and Why Do I Care?*
Ken MacInnis <[email protected]>  (* or, "Mi caja es su caja!", "My box is your box!")

Outline: Introduction and Motivation; Examples; Architecture, Components, Tools; Lessons Learned and The Future; Questions?

What is "grid computing"? Many different definitions:
Utility computing: cycles for sale
Distributed computing: distributed.net RC5, SETI@Home
High-performance resource sharing: clusters, storage, visualization, networking
"We will probably see the spread of 'computer utilities', which, like present electric and telephone utilities, will service individual homes and offices across the country." Len Kleinrock (1969)
The word "grid" doesn't equal Grid Computing: Sun Grid Engine is a mere scheduler!

Better definitions:
Common protocols allowing large problems to be solved in a distributed, multi-resource, multi-user environment.
"A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities." Kesselman & Foster (1998)
"…coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations." Kesselman, Foster, Tuecke (2000)

New Challenges for Computing: grid computing evolved out of a need to share resources among flexible, ever-changing "virtual organizations" (high-energy physics, astronomy, and more) that have differing site policies but common needs and disparate computing requirements.
A Globus Primer Or, Everything You Wanted to Know About Globus, but Were Afraid to Ask Describing Globus Toolkit Version 4
An Early and Incomplete Draft. Please send comments, criticisms, and suggestions to: [email protected]

Preface

The Globus Toolkit (GT) has been developed since the late 1990s to support the development of service-oriented distributed computing applications and infrastructures. Core GT components address basic issues relating to security, resource access and management, data movement and management, resource discovery, and so forth. A broader "Globus universe" comprises numerous tools and components that build on core GT4 functionality to provide many useful application-level functions. These tools have been used to develop many Grid systems and applications.

Version 4 of the Globus Toolkit, GT4, released in April 2005, represents a significant advance relative to the GT3 implementation of Web services functionality in terms of the range of components provided, functionality, standards conformance, usability, and quality of documentation.

This document is intended to provide a first introduction to key features of both GT4 and associated tools, and the ways in which these components can be used to develop Grid infrastructures and applications. Its focus is on the user's view of the technology and its application, and the practical techniques that should be employed to develop GT4-based applications. We discuss in turn: the applications that motivate the development of GT4 and related tools; the four tasks involved in building Grids (design, deployment, application, operations); GT4 structure, including its Web services (WS) and pre-WS components; the Globus universe and its various components; GT4 performance, scalability, and functionality testing; porting to GT4 from earlier versions of GT; who's using GT4; likely future directions; and a few words of history.
Presentation Slides
Rule-Oriented Data Management Infrastructure. Reagan W. Moore, San Diego Supercomputer Center, [email protected], http://www.sdsc.edu/srb. Funding: NSF ITR / NARA.

Distributed Data Management
• Driven by the goal of improving access to data, information, and knowledge:
Data grids for sharing data on an international scale
Digital libraries for publishing data
Persistent archives for preserving data
Real-time sensor systems for recording data
Collections for managing simulation output
• Identified fundamental concepts required by generic distributed data management infrastructure:
Data virtualization: manage properties of a shared collection independently of the remote storage systems
Trust virtualization: manage authentication, authorization, auditing, and accounting independently of the remote storage systems
(a minimal illustrative sketch of these two ideas follows this excerpt)

Extremely Successful
• After the initial design, worked with user communities to meet their data management requirements with the Storage Resource Broker (SRB)
• Used collaborations to fund the continued development, averaging 10-15 simultaneous collaborations for ten years
• Worked with:
Astronomy: data grid
Bio-informatics: digital library
Ecology: collection
Education: persistent archive
Engineering: digital library
Environmental science: data grid
High energy physics: data grid
Humanities: data grid
Medical community: digital library
Oceanography: real-time sensor data
Seismology: digital library
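The two virtualization ideas named in that excerpt can be made concrete with a short sketch. This is not the SRB API; it is a minimal Python illustration, and every class and method name here is hypothetical. The point is only that the shared collection, not the storage system, owns the logical namespace and the access policy:

    from abc import ABC, abstractmethod
    from dataclasses import dataclass, field
    from typing import Dict, List, Set

    class StorageDriver(ABC):
        """Hides the access protocol of one remote storage system."""
        @abstractmethod
        def get(self, physical_path: str) -> bytes: ...
        @abstractmethod
        def put(self, physical_path: str, data: bytes) -> None: ...

    class PosixDriver(StorageDriver):
        """One possible back end: an ordinary file system."""
        def get(self, physical_path: str) -> bytes:
            with open(physical_path, "rb") as f:
                return f.read()
        def put(self, physical_path: str, data: bytes) -> None:
            with open(physical_path, "wb") as f:
                f.write(data)

    @dataclass
    class Replica:
        driver: StorageDriver   # which storage system holds this copy
        physical_path: str      # where the copy lives inside that system

    @dataclass
    class SharedCollection:
        """Logical namespace and policy, independent of any storage system."""
        acl: Dict[str, Set[str]] = field(default_factory=dict)          # logical name -> users allowed to read
        replicas: Dict[str, List[Replica]] = field(default_factory=dict)

        def register(self, logical_name: str, replica: Replica, owner: str) -> None:
            self.replicas.setdefault(logical_name, []).append(replica)
            self.acl.setdefault(logical_name, set()).add(owner)

        def read(self, logical_name: str, user: str) -> bytes:
            # Trust virtualization: the collection, not the storage system,
            # decides who may read the object.
            if user not in self.acl.get(logical_name, set()):
                raise PermissionError(f"{user} may not read {logical_name}")
            # Data virtualization: the caller names a logical object and never
            # sees which driver or physical path serves the bytes.
            replica = self.replicas[logical_name][0]
            return replica.driver.get(replica.physical_path)

Under this kind of design, supporting a new storage system means writing a new driver; the collection's naming, authorization, and auditing logic stays unchanged, which is what lets one infrastructure serve the very different domains listed above.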
A Virtual Data Grid for LIGO
Ewa Deelman, Carl Kesselman (Information Sciences Institute, University of Southern California); Roy Williams (Center for Advanced Computing Research, California Institute of Technology); Albert Lazzarini, Thomas A. Prince (LIGO experiment, California Institute of Technology); Joe Romano (Physics, University of Texas, Brownsville); Bruce Allen (Physics, University of Wisconsin)

Abstract. GriPhyN (Grid Physics Network) is a large US collaboration to build grid services for large physics experiments, one of which is LIGO, a gravitational-wave observatory. This paper explains the physics and computing challenges of LIGO, and the tools that GriPhyN will build to address them. A key component needed to implement the data pipeline is a virtual data service: a system that dynamically creates the data products requested during the various stages. A requested product may already have been processed in the needed way and stored in a file on a storage system, it may be cached, or it may need to be created through computation (see the sketch following this abstract). The full elaboration of this system will allow complex data pipelines to be set up as virtual data objects, with existing data being transformed in diverse ways. This document is a data-computing view of a large physics observatory (LIGO), with plans for implementing a concept (Virtual Data) by the GriPhyN collaboration in its support. There are three sections:
• Physics: what data is collected and what kind of analysis is performed on the data.
• The Virtual Data Grid, which describes and elaborates the concept and how it is used in support of LIGO.
• The GriPhyN Layer, which describes how the virtual data will be stored and handled with the use of the Globus Replica Catalog and the Metadata Catalog.
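The resolution logic in that abstract (reuse a product that already exists in a file or cache, otherwise compute it) is easy to state as a sketch. This is not the GriPhyN implementation, which uses the Globus Replica Catalog and Metadata Catalog as distributed services; it is a minimal Python illustration under assumed, hypothetical names:

    from typing import Callable, Dict, Optional

    class VirtualDataService:
        """Resolve a request for a derived data product: reuse an existing
        copy if one can be found, otherwise run the derivation."""

        def __init__(self) -> None:
            self.cache: Dict[str, bytes] = {}                  # recently materialized products
            self.replica_catalog: Dict[str, str] = {}          # product name -> path of an existing file
            self.recipes: Dict[str, Callable[[], bytes]] = {}  # product name -> transformation

        def define(self, name: str, transform: Callable[[], bytes]) -> None:
            """Register a virtual data object by the computation that produces it."""
            self.recipes[name] = transform

        def materialize(self, name: str) -> bytes:
            # 1. Already cached from an earlier request?
            if name in self.cache:
                return self.cache[name]
            # 2. Already processed and sitting in a registered file?
            path: Optional[str] = self.replica_catalog.get(name)
            if path is not None:
                with open(path, "rb") as f:
                    data = f.read()
            else:
                # 3. Otherwise create it through computation.
                data = self.recipes[name]()
            self.cache[name] = data
            return data

    # Hypothetical usage: a filtered channel derived from raw detector output.
    vds = VirtualDataService()
    vds.define("strain/filtered", lambda: b"...band-passed samples...")
    product = vds.materialize("strain/filtered")

In a full pipeline the "recipes" would be the pipeline's transformation stages and the catalogs would be shared, distributed services rather than in-memory dictionaries, so that chains of virtual data objects can be materialized on demand anywhere on the grid.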
Everything You Always Wanted to Know About the Grid and Never Dared to Ask
Tony Hey and Geoffrey Fox

Outline
• Lecture 1: Origins of the Grid – The Past to the Present (TH)
• Lecture 2: Web Services, Globus, OGSA and the Architecture of the Grid (GF)
• Lecture 3: Data Grids, Computing Grids and P2P Grids (GF)
• Lecture 4: Grid Functionalities – Metadata, Workflow and Portals (GF)
• Lecture 5: The Future of the Grid – e-Science to e-Business (TH)

Lecture 1: Origins of the Grid – The Past to the Present [Grid Computing Book: Chs 1, 2, 3, 4, 5, 6, 36]
1. Licklider and the ARPANET
2. Technology Trends: Moore's Law and all that
3. The Imminent Data Deluge
4. Grids, e-Science and Cyberinfrastructure
5. Early Attempts at Building Grids
6. The NASA Information Power Grid
7. Grids Today

J.C.R. Licklider and the ARPANET
• Licklider was an experimental psychologist who was recruited in 1962 to head two groups, 'Behavioural Sciences' and 'Command and Control', at the US Advanced Research Projects Agency (ARPA).
• He had realized that in his own research he had to spend most of his time organizing and manipulating data before he could do any actual research, so he brought a vision with him to ARPA.

Origins of the Internet
• 'Lick' described his vision as follows: 'If such a network as I envisage could be brought into operation, we could have at least four large computers, perhaps six or eight small computers, and a great assortment of disk files and magnetic tape units … all churning away.'
• He had no idea how to build it, so he funded Computer Science research at