Lessons on Data Management Practices for Local Data Intermediaries
Recommended publications
- Data Curation: The Processing of Data
Krishna Gopal, Librarian, Kendriya Vidyalaya NTPC Dadri, GB Nagar ([email protected]). Journal of Information Management, ISSN: 2348-1765 (Print), ISSN: 2348-1773 (Online), Volume 3, Issue 1 (January-June 2016), pp. 71-79. © Society for Promotion of Library Professionals (SPLP), http://www.splpjim.org
Abstract: Responsibility for data curation rests, in different ways, on a number of different professional roles. Increasingly, within the library, data curation responsibilities are being associated with specific jobs (with titles like "data curator" or "data curation specialist"), and the rise of specialized training programs within library schools has reinforced this process by providing a stream of qualified staff to fill these roles. At the same time, other kinds of library staff have responsibilities that may dovetail with (or even take the place of) these specific roles: for instance, metadata librarians are strongly engaged in curatorial processes, as are repository managers and subject librarians who work closely with data creators in specific fields. Different techniques and different organizations are engaged in data curation. Finally, it is increasingly being recognized that the data creators themselves (faculty researchers or library staff) have a very important responsibility at the outset: to follow relevant standards, to document their work and methods, and to work closely with data curation specialists so that their data is curatable over the long term.
Keywords: Data Storage, Data Management, Data Curation, Data Annotation, Data Preservation
Introduction: Data curation is a term used to indicate management activities related to the organization and integration of data collected from various sources, annotation of the data, and publication and presentation of the data such that the value of the data is maintained over time and the data remains available for reuse and preservation.
- The Construction of Open Data Portal using DKAN for Integrate to Multiple Japanese Local Government Open Data
Toshikazu Seto (Center for Spatial Information Science, the University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan; [email protected]) and Yoshihide Sekimoto (Institute of Industrial Science, the University of Tokyo). Free and Open Source Software for Geospatial (FOSS4G) Conference Proceedings, Volume 16 (Bonn, Germany, 2016), Article 17. DOI: https://doi.org/10.7275/R5W957B0. Available at: https://scholarworks.umass.edu/foss4g/vol16/iss1/17
Subject areas: Computer and Systems Architecture; Geographic Information Sciences. This poster is published open access in the FOSS4G Conference Proceedings by ScholarWorks@UMass Amherst.
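Side note on the technical approach in this entry: portals built on DKAN typically publish their dataset metadata through a CKAN-style action API, which is what makes aggregating several local-government portals practical. The sketch below is a generic, assumed illustration of such harvesting; the portal URL is a placeholder, and the exact endpoints available depend on the portal's configuration rather than on anything stated in the abstract above.

```python
# Sketch of harvesting dataset metadata from an open data portal that exposes a
# CKAN-style action API (DKAN portals commonly offer a compatible endpoint, but
# availability should be checked per portal). The portal URL is a placeholder.
import json
import urllib.request

PORTAL = "https://data.example.jp"  # hypothetical local-government portal

def list_datasets(portal: str) -> list[str]:
    """Return the identifiers of all datasets published on the portal."""
    with urllib.request.urlopen(f"{portal}/api/3/action/package_list") as response:
        payload = json.load(response)
    return payload.get("result", [])

def get_dataset(portal: str, name: str) -> dict:
    """Fetch the full metadata record for a single dataset."""
    url = f"{portal}/api/3/action/package_show?id={name}"
    with urllib.request.urlopen(url) as response:
        return json.load(response).get("result", {})

if __name__ == "__main__":
    # Print the titles of the first few datasets as a quick sanity check.
    for name in list_datasets(PORTAL)[:5]:
        record = get_dataset(PORTAL, name)
        print(name, "-", record.get("title"))
```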
- LinkedPipes DCAT-AP Viewer: A Native DCAT-AP Data Catalog
Jakub Klímek and Petr Škoda, Charles University, Faculty of Mathematics and Physics, Malostranské nám. 25, 118 00 Praha 1, Czech Republic ([email protected]).
Abstract: In this demonstration we present LinkedPipes DCAT-AP Viewer (LP-DAV), a data catalog built to support DCAT-AP, the European standard for representation of metadata in data portals and an application profile of the DCAT W3C Recommendation. We present its architecture and data loading process, and on the example of the Czech National Open Data portal we show its main advantages compared to other data catalog solutions such as CKAN. These include support for the Named Authority Lists in EU Vocabularies (EU NALs), the controlled vocabularies mandatory in DCAT-AP, and support for bulk loading of DCAT-AP RDF dumps using LinkedPipes ETL.
Keywords: catalog, DCAT, DCAT-AP, linked data
Introduction: Currently, two worlds exist in the area of data catalogs on the web. In the first, there are a few well-established data catalog implementations such as CKAN or DKAN, each with its own data model and a JSON-based API for accessing and writing metadata. In the second, there is the Linked Data and RDF-based DCAT W3C Recommendation and its application profiles, such as the European DCAT-AP, which are de facto standards for the representation of metadata in data portals. The problem is that CKAN has been around for a while and is better developed, whereas DCAT is still quite new, with insufficient tooling support; nevertheless, it is the standard.
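Because this entry is about native DCAT-AP cataloguing, a small illustration of what a DCAT-style record looks like in RDF may be useful. The following sketch uses the rdflib Python library; the dataset URI, titles and file URL are hypothetical placeholders, and the snippet is not taken from LP-DAV or from the Czech portal.

```python
# A minimal sketch of a DCAT-style dataset description built with rdflib.
# All URIs and literal values are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, DCTERMS

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCTERMS)

dataset = URIRef("https://example.org/dataset/air-quality")        # hypothetical
distribution = URIRef("https://example.org/dataset/air-quality/csv")

# The dataset itself: type, title and description.
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Air quality measurements", lang="en")))
g.add((dataset, DCTERMS.description, Literal("Hourly air quality readings.", lang="en")))
g.add((dataset, DCAT.distribution, distribution))

# One distribution: a downloadable CSV file.
g.add((distribution, RDF.type, DCAT.Distribution))
g.add((distribution, DCAT.downloadURL, URIRef("https://example.org/files/air-quality.csv")))
g.add((distribution, DCAT.mediaType, Literal("text/csv")))

# Serialize as Turtle, a common exchange form for DCAT-AP dumps.
print(g.serialize(format="turtle"))
```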
Download Date 04/10/2021 06:52:09
Reunión subregional de planificación de ODINCARSA (Red de Datos e Información Oceanográficos para las Regiones del Caribe y América del Sur), Universidad Autónoma de Baja California (UABC) Ensenada, Mexico, 7-10 December 2009, Item Type Report Publisher UNESCO Download date 04/10/2021 06:52:09 Item License http://creativecommons.org/licenses/by-nc/3.0/ Link to Item http://hdl.handle.net/1834/5678 Workshop Report No. 225 Informes de reuniones de trabajo Nº 225 Reunión subregional de planificación de ODINCARSA (Red de Datos e Información Oceanográficos para las Regiones del Caribe y América del Sur) Universidad Autónoma de Baja California (UABC) Ensenada (México) 7-10 de diciembre de 2009 ODINCARSA (Ocean Data and Information Network for the Caribbean and South America region) Latin America sub- regional Planning Meeting Universidad Autónoma de Baja California (UABC) Ensenada, Mexico, 7-10 December 2009 UNESCO 2010 IOC Workshop Report No. 225 Oostende, 23 February 2010 English and Spanish Workshop Participants For bibliographic purposes this document should be cited as follows: ODINCARSA (Ocean Data and Information Network for the Caribbean and South America region) Latin America sub-regional Planning Meeting, Universidad Autónoma de Baja California (UABC), Ensenada, Mexico, 7-10 December 2009 Paris, UNESCO, 23 February 2010 (IOC Workshop Report No. 225) (English and Spanish) La Comisión Oceanográfica Intergubernamental (COI) de la UNESCO celebra en 2010 su 50º Aniversario. Desde la Expedición Internacional al Océano Índico en 1960, en la que la COI asumió su función de coordinadora principal, se ha esforzado por promover la investigación de los mares, la protección del océano y la cooperación internacional. -
- Central Archiving Platform (CAP)
Central Archiving Platform (CAP) is a complex integrated system for the long-term storage, processing, securing and use of any digital data. CAP provides a solution for recording, collection, archiving and data protection, as well as web-harvesting and web-archiving tasks. The system can also be used as a basis for the systematic digitization of any kind of analog information. Besides long-term data storage, the main benefit of CAP is the institutionalization of digital archives meeting international norms and standards. A part of the solution is therefore a broad body of know-how covering legislative norms, data processing and methodology for long-term data storage, the imparting of the information in the archive, and further content handling.
Benefits: long-term data storage and protection; compliance with international criteria; top security level and standards; conformity with legislative norms; modularity, scalability, robustness and expandability; data processing and methodology for content handling and management.
CAP services: CAP provides long-term digital data archiving services meeting the OAIS (Open Archival Information System) standard. The main purpose of such an archive is to keep the data secure, legible and understandable. CAP therefore focuses on monitoring the lifetime cycle of stored data formats and their conversion/emulation, as well as their bit-level protection. The system also provides support for the identification and selection of formats suitable for archiving, legislation support and data selection variability. The archived objects are secured and the solution is fully compliant with legal norms on intellectual property.
Logistics: Data delivery can be realized online or using a specialized logistics system that can be included in the CAP solution.
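The bit-protection and format-monitoring duties mentioned for CAP generally rest on recording checksums at ingest and re-verifying them later. The sketch below is not part of CAP; it is a minimal, generic fixity check in Python, with the manifest file name and archive directory chosen as hypothetical examples.

```python
# Minimal fixity-check sketch: recompute SHA-256 checksums for archived files
# and compare them with the values recorded at ingest time.
# Paths and manifest layout are hypothetical, not part of any real CAP deployment.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large archival objects do not have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_archive(manifest_file: Path, archive_root: Path) -> list[str]:
    """Return the files whose current checksum differs from the manifest."""
    manifest = json.loads(manifest_file.read_text())  # {"relative/path": "expected sha256"}
    damaged = []
    for relative_path, expected in manifest.items():
        if sha256_of(archive_root / relative_path) != expected:
            damaged.append(relative_path)
    return damaged

if __name__ == "__main__":
    problems = verify_archive(Path("manifest.json"), Path("archive"))
    print("bit-level damage detected in:", problems or "none")
```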
- The EnviDat Concept for an Institutional Environmental Data Portal
Ionuț Iosifescu Enescu, Gian-Kasper Plattner, Lucia Espona Pernas, Dominik Haas-Artho, Sandro Bischof, Michael Lehning and Konrad Steffen. Data Science Journal, 17: 28, pp. 1-17, 2018. DOI: https://doi.org/10.5334/dsj-2018-028 (research paper). Affiliations: Swiss Federal Institute for Forest, Snow and Landscape Research WSL; WSL Institute for Snow and Avalanche Research SLF; School of Architecture, Civil and Environmental Engineering, EPFL; ETH Zurich. Corresponding author: Ionuț Iosifescu Enescu ([email protected]).
Abstract: EnviDat is the environmental data portal developed by the Swiss Federal Institute for Forest, Snow and Landscape Research WSL. The strategic initiative EnviDat highlights the importance WSL places on Research Data Management (RDM) at the institutional level and demonstrates its commitment to accessible research data in order to advance environmental science. EnviDat focuses on registering and publishing environmental data sets and provides unified and efficient access to WSL's comprehensive reservoir of environmental monitoring and research data. Research data management is organized in a decentralized manner, where the responsibility to curate research data remains with the experts and the original data providers. EnviDat supports data producers and data users in the registration, documentation, storage, publication, search and retrieval of a wide range of heterogeneous data sets from the environmental domain. Innovative features include (i) a flexible, three-layer metadata schema, (ii) an additive data discovery model that considers spatial data and (iii) a DataCRediT mechanism designed for specifying data authorship.
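The abstract refers to a flexible, three-layer metadata schema without spelling out the layers here. Purely to illustrate the general idea of layered metadata, and not to reproduce EnviDat's actual schema, the sketch below separates a mandatory core layer, an optional descriptive layer and a free-form custom layer; all field names are invented.

```python
# Illustration of the general idea behind a layered metadata schema:
# a small mandatory core, an optional descriptive layer and a free-form
# custom layer. Field names are invented; this is NOT EnviDat's actual schema.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class DatasetRecord:
    # Layer 1: minimal mandatory core, enough to register and cite the dataset.
    title: str
    authors: list[str]
    publication_year: int
    # Layer 2: optional descriptive fields that improve discovery.
    keywords: list[str] = field(default_factory=list)
    spatial_extent: str | None = None        # e.g. a bounding box or region name
    # Layer 3: free-form, domain-specific extras that need no schema change.
    custom: dict[str, Any] = field(default_factory=dict)

record = DatasetRecord(
    title="Snow depth observations, hypothetical site",
    authors=["Example, A.", "Example, B."],
    publication_year=2018,
    keywords=["snow", "monitoring"],
    custom={"sensor_model": "hypothetical-123"},
)
print(record)
```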
- Vittorio Cannas: Curriculum Vitae
Personal information: Vittorio Cannas, Viale Giuseppe Serrecchia 16, 00015 Monterotondo (RM), Italy. Phone: +39 06 90626740, +39 339 6071094. Email: [email protected], [email protected]. Gender: male. Date of birth: 22/11/1968. Nationality: Italian.
Professional experience:
Since 07/2015: President, SpacEarth Technology Srl (http://www.spacearth.net/), a spin-off of the Istituto Nazionale di Geofisica e Vulcanologia (http://www.ingv.it/en/). Activities carried out: business development and networking with companies and research organizations at the international level in the aerospace, mining and environmental sectors; management of relationships with business networks (Sardinia Aerospace Cluster, Lazio Connect association (aerospace sector), ATEN-IS Lazio business network (aerospace sector), CinItaly Italy-China association (aerospace, ICT and environment sectors)); management of European R&D projects (H2020, ESA, EIT) in the aerospace, mining and environmental sectors; writing of business plans and market analyses in preparation for business development; technology transfer, patenting and exploitation of research results; internationalization and innovative finance instruments; scouting, selection and writing of project proposals and commercial offers in response to regional, national and international calls and tenders; expert evaluator of POR-FESR projects in the Smart Cities & Communities sector for the Sicily Region; lecturer on innovative finance instruments and managerial development for start-ups; lecturer on satellite systems in support of agriculture, environmental risks and climate change. Sector: business development and training at the national and international level for clients in the aerospace, environment and mining sectors.
Since 07/2014: Senior Advisor, Leoni Corporate Advisors (http://www.corporate-advisors.eu/), Milan, Italy.
- HDL: Towards a Harmonized Dataset Model for Open Data Portals
Ahmad Assaf (EURECOM, Sophia Antipolis, France, and SAP Labs France), Raphaël Troncy (EURECOM, Sophia Antipolis, France) and Aline Senart (SAP Labs France).
Abstract: The Open Data movement triggered an unprecedented amount of data published in a wide range of domains. Governments and corporations around the world are encouraged to publish, share, use and integrate Open Data. There are many areas where one can see the added value of Open Data, from transparency and self-empowerment to improving efficiency, effectiveness and decision making. This growing amount of data requires rich metadata in order to reach its full potential. This metadata enables dataset discovery, understanding, integration and maintenance. Data portals, which are considered to be datasets' access points, offer metadata represented in different and heterogeneous models. In this paper, we first conduct a unique and comprehensive survey of seven metadata models: CKAN, DKAN, Public Open Data, Socrata, VoID, DCAT and Schema.org. Next, we propose HDL, a harmonized dataset model based on this survey. We describe use cases that show the benefits of providing rich metadata to enable dataset discovery, search and spam detection.
Keywords: Dataset Metadata, Dataset Profile, Dataset Model, Data Quality
Introduction: Open data is data that can be easily discovered, reused and redistributed by anyone. It can include anything from statistics, geographical data and meteorological data to digitized books from libraries. Open data should have both legal and technical dimensions: it should be placed in the public domain under liberal terms of use with minimal restrictions, and it should be available in electronic formats that are non-proprietary and machine readable.
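Since the paper's contribution is a harmonized dataset model built on top of heterogeneous portal metadata, a toy example of such harmonization may help. The sketch below maps a CKAN-style package and a schema.org/Dataset document onto one common record; the target field names are invented for illustration and do not reproduce the HDL model itself.

```python
# Illustrative sketch of harmonizing dataset metadata from two portal models
# into one common record. The target field names are invented for this example
# and do not reproduce the actual HDL model proposed in the paper.

def from_ckan(pkg: dict) -> dict:
    """Map a CKAN-style package dict onto the common record."""
    return {
        "title": pkg.get("title"),
        "description": pkg.get("notes"),
        "license": pkg.get("license_id"),
        "keywords": [t.get("name") for t in pkg.get("tags", [])],
        "distributions": [r.get("url") for r in pkg.get("resources", [])],
    }

def from_schema_org(doc: dict) -> dict:
    """Map a schema.org/Dataset JSON-LD document onto the common record."""
    dists = doc.get("distribution", [])
    if isinstance(dists, dict):        # JSON-LD allows a single object here
        dists = [dists]
    return {
        "title": doc.get("name"),
        "description": doc.get("description"),
        "license": doc.get("license"),
        "keywords": doc.get("keywords", []),
        "distributions": [d.get("contentUrl") for d in dists],
    }

ckan_record = {"title": "Air quality", "notes": "Hourly readings",
               "license_id": "cc-by", "tags": [{"name": "environment"}],
               "resources": [{"url": "https://example.org/air.csv"}]}
print(from_ckan(ckan_record))
```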
- Web Data Extraction, Applications and Techniques: A Survey
Emilio Ferrara (Center for Complex Networks and Systems Research, Indiana University, Bloomington, IN 47408, USA), Pasquale De Meo (Univ. of Messina, Dept. of Ancient and Modern Civilization, Polo Annunziata, I-98166 Messina, Italy), Giacomo Fiumara (Univ. of Messina, Dept. of Mathematics and Informatics, viale F. Stagno D'Alcontres 31, I-98166 Messina, Italy) and Robert Baumgartner (Lixto Software GmbH, Austria).
Abstract: Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool to perform data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques allow gathering a large amount of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users, and this offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential of cross-fertilization, i.e., the possibility of re-using Web Data Extraction techniques originally designed to work in a given domain in other domains.
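As a small companion to this entry, the sketch below shows the simplest form of rule-based web data extraction: pulling hyperlinks and their anchor text out of an HTML page with the Python standard library. It is a toy wrapper for illustration, not an implementation of any system discussed in the survey.

```python
# Generic illustration of rule-based web data extraction: collect every link and
# its anchor text from an HTML page using only the standard library.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []            # collected (href, text) pairs
        self._current_href = None
        self._buffer = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")
            self._buffer = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._buffer.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.links.append((self._current_href, "".join(self._buffer).strip()))
            self._current_href = None

html = '<p>See the <a href="https://example.org/data">open data portal</a>.</p>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)   # [('https://example.org/data', 'open data portal')]
```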
- Recommendations for Open Data Portals: From Setup to Sustainability
This study has been prepared by Capgemini Invent as part of the European Data Portal. The European Data Portal is an initiative of the European Commission, implemented with the support of a consortium led by Capgemini Invent, including Intrasoft International, Fraunhofer Fokus, con.terra, Sogeti, 52North, Time.Lex, the Lisbon Council, and the University of Southampton. The Publications Office of the European Union is responsible for contract management of the European Data Portal.
For more information about this paper, please contact: European Commission, Directorate General for Communications Networks, Content and Technology, Unit G.1 Data Policy and Innovation, Daniele Rizzi, Policy Officer ([email protected]); or European Data Portal, Gianfranco Cecconi, European Data Portal Lead ([email protected]). Written by Jorn Berends, Wendy Carrara, Wander Engbers and Heleen Vollers. Last update: 15.07.2020. Web: https://europeandataportal.eu/
Disclaimer: By the European Commission, Directorate-General of Communications Networks, Content and Technology. The information and views set out in this publication are those of the author(s) and do not necessarily reflect the official opinion of the Commission. The Commission does not guarantee the accuracy of the data included in this study. Neither the Commission nor any person acting on the Commission's behalf may be held responsible for the use which may be made of the information contained therein.
Luxembourg: Publications Office of the European Union, 2020. © European Union, 2020. OA-03-20-042-EN-N. ISBN: 978-92-78-41872-4. doi: 10.2830/876679. The reuse policy of European Commission documents is implemented by the Commission Decision 2011/833/EU of 12 December 2011 on the reuse of Commission documents (OJ L 330, 14.12.2011, p.
- RDM Technical Infrastructure Components and Evaluations
John A. Lewis, 13/11/2014.
Contents: RDM technical infrastructure components: 1. Integrated systems and integrating components; 2. Repository platforms; 3. Digital preservation (repository) systems and services; 4. 'Archive data' storage; 5. 'Active data' management and collaboration platforms; 6. Catalogue software / access platforms; 7. Current Research Information Systems (CRIS); 8. Data management planning (DMP) tools; 9. Metadata generators; 10. Data capture and workflow management systems; 11. Data transfer protocols; 12. Identifier services and identity components; 13. Other software systems and platforms of interest. Also: Reviews, evaluations and comparisons of infrastructure components; References.
RDM technical infrastructure components: Components of the RDM infrastructures established by higher education institutions are briefly considered below. The component function, the software/platform underlying the component and component interoperability are described, any evaluations identified, and institutions employing the component,
- Design and Implementation of a NoSQL Database for Decision Support in R&D Management
Prem Sagar Jeevangekar. Master's thesis, University of Magdeburg, School of Computer Science, Databases and Software Engineering group, April 23, 2019. Advisors: Prof. Dr. rer. nat. habil. Gunter Saake and Dipl.-Inf. Wolfram Fenske (Department of Computer Science), Dr. rer. nat. Matthias Plaue (MAPEGY GmbH).
Abstract: The concept of the database was introduced in the early 1960s, and the relational database followed in the early 1970s. Relational databases have had a great impact on data storage ever since. However, the growth of data in the modern database world has driven the development of more efficient database technologies that can cope with exponentially increasing data volumes. When working with structured datasets, relational databases are reliable and efficient, but they lose efficiency when huge amounts of unstructured data are produced by real-world applications. To overcome the problems faced by relational databases, companies started looking for more reliable, flexible, highly scalable and high-performance databases. In the early 2000s, NoSQL databases were introduced and gained huge popularity within a short period of time. The main aim of the thesis is to design and implement a NoSQL database (MongoDB) and investigate its performance in terms of query execution speed. The thesis provides an overview of a relational database (PostgreSQL) and investigates the performance of a NoSQL database (MongoDB).
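To make the thesis topic concrete, the sketch below shows the document-oriented style of storage and querying that MongoDB offers, together with a crude timing of one query. The collection name, fields and connection string are hypothetical examples and do not reflect the schema or benchmarks used in the thesis; the sketch assumes a MongoDB server is running locally and that the pymongo driver is installed.

```python
# Minimal sketch of document-oriented storage and querying with MongoDB.
# Collection name, fields and connection string are hypothetical examples,
# not the schema or benchmark setup used in the thesis.
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumes a local MongoDB instance
collection = client["rnd_support"]["publications"]

collection.insert_one({
    "title": "Open data portals for R&D monitoring",
    "year": 2019,
    "keywords": ["open data", "NoSQL"],
    "metrics": {"citations": 12},                    # nested fields need no schema change
})

start = time.perf_counter()
recent = list(collection.find({"year": {"$gte": 2015}}).limit(10))
elapsed = time.perf_counter() - start

print(f"fetched {len(recent)} documents in {elapsed * 1000:.2f} ms")
```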