Learning from 25 years of the extensible N-Dimensional Data Format

Tim Jenness a,∗, David S. Berry b, Malcolm J. Currie b, Peter W. Draper c, Frossie Economou d, Norman Gray e, Brian McIlwrath f, Keith Shortridge g, Mark B. Taylor h, Patrick T. Wallace f, Rodney F. Warren-Smith f

a Department of Astronomy, Cornell University, Ithaca, NY 14853, USA
b Joint Astronomy Centre, 660 N. A‘ohōkū Place, Hilo, HI 96720, USA
c Department of Physics, Institute for Computational Cosmology, University of Durham, South Road, Durham DH1 3LE, UK
d LSST Project Office, 933 N. Cherry Ave, Tucson, AZ 85721, USA
e SUPA School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
f RAL Space, STFC Rutherford Appleton Laboratory, Harwell Oxford, Didcot, Oxfordshire OX11 0QX, UK
g Australian Astronomical Observatory, 105 Delhi Rd, North Ryde, NSW 2113, Australia
h H. H. Wills Physics Laboratory, Bristol University, Tyndall Avenue, Bristol, UK

Abstract

The extensible N-Dimensional Data Format (NDF) was designed and developed in the late 1980s to provide a data model suitable for use in a variety of astronomy data processing applications supported by the UK Starlink Project. Starlink applications were used extensively, primarily in the UK astronomical community, and form the basis of a number of advanced data reduction pipelines today. This paper provides an overview of the historical drivers for the development of NDF and the lessons learned from using a defined hierarchical data model for many years in data reduction software, data pipelines and in data acquisition systems.

Keywords: data formats, data models, Starlink, History of computing

1. Introduction

In this paper the term “data model” refers to the organization, naming and semantics of components in a hierarchy.
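To make this usage concrete, such a hierarchy can be pictured as a tree of named components, each of which is either a primitive value, an array, or a further structure. The short Python sketch below is purely illustrative: the component names (DATA_ARRAY, VARIANCE, MORE, and so on) and the nesting are assumptions chosen to echo an NDF-like layout rather than a definition of the format itself.

    import numpy as np

    # Illustrative sketch of a hierarchical data model: named components,
    # each either scalar metadata, an N-dimensional array, or a nested
    # structure. The names here are assumptions, not a formal NDF definition.
    observation = {
        "TITLE": "Example observation",        # scalar metadata component
        "LABEL": "Flux",
        "UNITS": "mJy",
        "DATA_ARRAY": np.zeros((256, 256)),    # primary N-dimensional array
        "VARIANCE": np.full((256, 256), 0.01), # per-pixel uncertainty
        "MORE": {                              # extension structure holding
            "FITS": ["TELESCOP= 'EXAMPLE'"],   # application-specific additions
        },
    }

    def component(structure, path):
        """Resolve a dotted component path, e.g. 'MORE.FITS'."""
        node = structure
        for name in path.split("."):
            node = node[name]
        return node

    print(component(observation, "MORE.FITS"))

The point of the sketch is that both the names of the components and their position in the hierarchy carry meaning, which is the sense in which “data model” is used throughout this paper.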