Wikibase: The Backbone of Wikidata

16 pages · PDF · 1,020 KB

WIKIBASE: INTRODUCTION. The Backbone of Wikidata
Jan Kamlah, 05.05.2020

Welcome to Wikibase! Free software for collaborative knowledge management systems, and an extension for structured data.

"All-in-1" full semantic technology stack. The Wikibase technology stack provides:
• a database for structured data
• text-based search (Elasticsearch)
• the MediaWiki API
• a SPARQL endpoint
• manual and machine interfaces
[Stack diagram: structured data feeds a text-based search index and a SPARQL triplestore (query index); the MediaWiki API and the SPARQL API expose both to GUIs, bots, and OpenRefine.]

Key features:
• a modern Linked Open Data (LOD) data model
• collaborative working
• agile ontology development
• just-in-time development of the vocabulary
• administration of your own instance (rights management, …)
• traceability of changes (versioning)
• referencing of statements (provenance)
• an active community with a great deal of experience
• stable and well-documented software
• many tools (mostly open source), among them OpenRefine
• the Wikimedia Foundation

Concept: the RDF triple model (example). In the statement "DINI is an e.V.", DINI is the subject, "is a" (here: the legal form, Rechtsform) the predicate, and "e.V." the object.

The RDF triple model + qualifier + reference (example). The statement "DINI is an e.V. since 2002" adds the qualifier "since 2002", with https://dini.de/dini/geschichte/ recorded as the reference.

The data model in Wikibase. Every item carries a label, a unique identifier, a description, and statements; each statement pairs a property with a value and can additionally carry qualifiers and references.

Projects

Sources:
Wikibase logo: https://commons.wikimedia.org/wiki/File:Wikibase_logo.svg
Wikimedia logo: https://upload.wikimedia.org/wikipedia/commons/3/31/Wikimedia_Foundation_logo_-_vertical.svg
MediaWiki logo: https://upload.wikimedia.org/wikipedia/commons/a/a3/MediaWiki_logo_1.png
Wikidata logo: https://upload.wikimedia.org/wikipedia/commons/6/66/Wikidata-logo-en.svg
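The two machine interfaces named in the stack, the MediaWiki API and the SPARQL endpoint, are easy to try out. Below is a minimal Python sketch against the public Wikidata instance; any Wikibase exposes the same API. The item Q42 (Douglas Adams) and the identifiers P31 ("instance of") and Q5 ("human") are Wikidata-specific examples chosen here for illustration, not taken from the slides.

```python
import requests

API = "https://www.wikidata.org/w/api.php"
SPARQL = "https://query.wikidata.org/sparql"

# 1) MediaWiki API: fetch one item and walk the Wikibase data model
#    (label, description, statements with qualifiers and references).
entity = requests.get(API, params={
    "action": "wbgetentities", "ids": "Q42", "format": "json",
}).json()["entities"]["Q42"]

print(entity["labels"]["en"]["value"])        # label
print(entity["descriptions"]["en"]["value"])  # description

claim = entity["claims"]["P31"][0]            # first "instance of" statement
print(claim["mainsnak"]["datavalue"]["value"])  # the statement's value
print(claim.get("qualifiers", {}))              # qualifiers, if any
print(claim.get("references", []))              # references (provenance)

# 2) SPARQL endpoint: the same data as a queryable RDF graph.
query = "SELECT ?item WHERE { ?item wdt:P31 wd:Q5 } LIMIT 3"
results = requests.get(SPARQL, params={"query": query, "format": "json"},
                       headers={"User-Agent": "wikibase-demo/0.1"}).json()
for binding in results["results"]["bindings"]:
    print(binding["item"]["value"])
```

The JSON returned by wbgetentities mirrors the data-model slide one-to-one: labels, descriptions, and claims whose statements carry a property, a value, optional qualifiers, and references.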
Recommended publications
  • GND Meets Wikibase
    GND meets Wikibase. Barbara Fischer and Sarah Hartmann (CLARIN, 03.09.2020).
    – We want to make our free structured authority data more easily accessible and interoperable.
    – We are testing Wikibase's functionality as a toolkit for regulations.
    The Gemeinsame Normdatei (GND) is an authority file used by cultural heritage institutions (mainly libraries) in the D-A-CH countries. It comprises 16 million identifiers referring to persons (and names of persons), corporate bodies, conferences, geographic names, subject headings, and works, and is run cooperatively by the GND agencies; about 1,000 institutions are active users. The data is open (CC0), with APIs and documentation. As the GND opens up to GLAM institutions, science, and others, the handy tool of librarians has to evolve into a cross-domain tool: organization, data model, infrastructure, and community building.
    On Wikibase: open source, developed on behalf of the Wikimedia Foundation by staff of Wikimedia Deutschland e.V.; based on MediaWiki; an extension basically serving Wikidata's needs; yet at the very start of becoming a standardized product.
    The project is an evaluation in two steps. Part one: proof of concept. Part two: testing the capacity. Proof-of-concept questions in 2019: Is Wikibase convenient for the collaborative production and maintenance of authority data, for both the actual GND and a "GND 2.0"? Will Wikibase…
  • Creating Library Linked Data with Wikibase: Lessons Learned from Project Passage
    OCLC Research Report. Creating Library Linked Data with Wikibase: Lessons Learned from Project Passage.
    Jean Godby, Karen Smith-Yoshimura, and Bruce Washburn (OCLC Research); Kalan Knudson Davis (University of Minnesota); Karen Detling (National Library of Medicine); Christine Fernsebner Eslao, Marc McGee, Honor Moody, and Craig Thomas (Harvard University); Steven Folsom (Cornell University); Xiaoli Li (University of California, Davis); Karen Miller (Northwestern University); Holly Tomren (Temple University).
    © 2019 OCLC Online Computer Library Center, Inc. This work is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). August 2019. OCLC Research, Dublin, Ohio 43017 USA. www.oclc.org. ISBN: 978-1-55653-135-4. DOI: 10.25333/faq3-ax08. OCLC Control Number: 1110105996.
    ORCID iDs: Jean Godby https://orcid.org/0000-0003-0085-2134 · Karen Smith-Yoshimura https://orcid.org/0000-0002-8757-2962 · Bruce Washburn http://orcid.org/0000-0003-4396-7345 · Kalan Knudson Davis https://orcid.org/0000-0002-1032-6042 · Christine Fernsebner Eslao https://orcid.org/0000-0002-7621-916X · Steven Folsom https://orcid.org/0000-0003-3427-5769 · Xiaoli Li https://orcid.org/0000-0001-5362-2151 · Marc McGee https://orcid.org/0000-0001-5757-1494 · Karen Miller https://orcid.org/0000-0002-9597-2376 · Craig Thomas https://orcid.org/0000-0002-4027-7907 · Holly Tomren https://orcid.org/0000-0002-6062-1138.
    Please direct correspondence to: OCLC Research, [email protected]. Suggested citation: Godby, Jean, Karen Smith-Yoshimura, Bruce Washburn, Kalan Knudson Davis, Karen Detling, Christine Fernsebner Eslao, Steven Folsom, Xiaoli Li, Marc McGee, Karen Miller, Honor Moody, Craig Thomas, and Holly Tomren.
  • Working with MediaWiki (Yaron Koren)
    Working with MediaWiki, by Yaron Koren. Published by WikiWorks Press. Copyright ©2012 by Yaron Koren, except where otherwise noted. Chapter 17, "Semantic Forms", includes significant content from the Semantic Forms homepage (https://www.mediawiki.org/wiki/Extension:Semantic_Forms), available under the Creative Commons BY-SA 3.0 license. All rights reserved. Library of Congress Control Number: 2012952489. ISBN: 978-0615720302. First edition, second printing: 2014. Ordering information for this book can be found at http://workingwithmediawiki.com. All printing of this book is handled by CreateSpace (https://createspace.com), a subsidiary of Amazon.com. Cover design by Grace Cheong (http://gracecheong.com).
    Contents: 1 About MediaWiki (History of MediaWiki; Community and support; Available hosts). 2 Setting up MediaWiki (The MediaWiki environment; Download; Installing; Setting the logo; Changing the URL structure; Updating MediaWiki). 3 Editing in MediaWiki (Tabs; Creating and editing pages; Page history; Page diffs; Undoing; Blocking and rollbacks; Deleting revisions; Moving pages; Deleting pages; Edit conflicts). 4 MediaWiki syntax (Wikitext; Interwiki links; Including HTML; Templates; Parser and tag functions; Variables; Behavior switches). 5 Content organization (Categories; Namespaces; Redirects; Subpages and super-pages; Special pages). 6 Communication (Talk pages; LiquidThreads; Echo & Flow; Handling reader comments; Chat; Emailing users). 7 Images and files (Uploading; Displaying images; Image galleries…)
  • Analyzing Wikidata Transclusion on English Wikipedia
    Analyzing Wikidata Transclusion on English Wikipedia. Isaac Johnson, Wikimedia Foundation, [email protected].
    Abstract. Wikidata is steadily becoming more central to Wikipedia, not just in maintaining interlanguage links, but in automated population of content within the articles themselves. It is not well understood, however, how widespread this transclusion of Wikidata content is within Wikipedia. This work presents a taxonomy of Wikidata transclusion from the perspective of its potential impact on readers and an associated in-depth analysis of Wikidata transclusion within English Wikipedia. It finds that Wikidata transclusion that impacts the content of Wikipedia articles happens at a much lower rate (5%) than previous statistics had suggested (61%). Recommendations are made for how to adjust current tracking mechanisms of Wikidata transclusion to better support metrics and patrollers in their evaluation of Wikidata transclusion.
    Keywords: Wikidata · Wikipedia · Patrolling
    1 Introduction. Wikidata is steadily becoming more central to Wikipedia, not just in maintaining interlanguage links, but in automated population of content within the articles themselves. This transclusion of Wikidata content within Wikipedia can help to reduce maintenance of certain facts and links by shifting the burden to maintain up-to-date, referenced material from each individual Wikipedia to a single repository, Wikidata. Current best estimates suggest that, as of August 2020, 62% of Wikipedia articles across all languages transclude Wikidata content. This statistic ranges from Arabic Wikipedia (arwiki) and Basque Wikipedia (euwiki), where nearly 100% of articles transclude Wikidata content in some form, to Japanese Wikipedia (jawiki) at 38% of articles and many small wikis that lack any Wikidata transclusion.
  • Extending Semantic MediaWiki for Interoperable Biomedical Data
    Lampa et al., Journal of Biomedical Semantics (2017) 8:35. DOI 10.1186/s13326-017-0136-y. SOFTWARE, Open Access.
    RDFIO: extending Semantic MediaWiki for interoperable biomedical data management. Samuel Lampa, Egon Willighagen, Pekka Kohonen, Ali King, Denny Vrandečić, Roland Grafström and Ola Spjuth.
    Abstract. Background: Biological sciences are characterised not only by an increasing amount but also the extreme complexity of its data. This stresses the need for efficient ways of integrating these data in a coherent description of biological systems. In many cases, biological data needs organization before integration. This is not seldom a collaborative effort, and it is thus important that tools for data integration support a collaborative way of working. Wiki systems with support for structured semantic data authoring, such as Semantic MediaWiki, provide a powerful solution for collaborative editing of data combined with machine-readability, so that data can be handled in an automated fashion in any downstream analyses. Semantic MediaWiki lacks a built-in data import function though, which hinders efficient round-tripping of data between interoperable Semantic Web formats such as RDF and the internal wiki format.
    Results: To solve this deficiency, the RDFIO suite of tools is presented, which supports importing of RDF data into Semantic MediaWiki, with metadata needed to export it again in the same RDF format, or ontology. Additionally, the new functionality enables mash-ups of automated data imports combined with manually created data presentations. The application of the suite of tools is demonstrated by importing drug discovery related data about rare diseases from Orphanet and acid dissociation constants from Wikidata.
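To picture the round-tripping this abstract describes, here is a tiny Python sketch using rdflib. This is an illustration under assumptions: RDFIO itself is a Semantic MediaWiki extension, not a Python library, and the file names are hypothetical.

```python
from rdflib import Graph

# Parse RDF from an interoperable Semantic Web format (hypothetical file).
g = Graph()
g.parse("rare_diseases.ttl", format="turtle")

# Work with the data as triples: subject, predicate, object.
for s, p, o in g:
    print(s, p, o)

# Serialize back out in the same format. Keeping enough metadata to make
# this export lossless is the gap RDFIO fills for the wiki's internal format.
g.serialize(destination="rare_diseases_out.ttl", format="turtle")
```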
  • Using the W3C Generating RDF from Tabular Data on the Web Recommendation to Manage Small Wikidata Datasets
    Using the W3C Generating RDF from Tabular Data on the Web Recommendation to manage small Wikidata datasets. Steven J. Baskauf (Jean and Alexander Heard Libraries, Vanderbilt University, Nashville, Tennessee, USA; e-mail: [email protected]; https://orcid.org/0000-0003-4365-3135) and Jessica K. Baskauf (Carleton College, Northfield, Minnesota, USA; https://orcid.org/0000-0002-1772-1045).
    Editors: Jose Emilio Labra Gayo (University of Oviedo, Spain); Anastasia Dimou (IDLab, Ghent University, Belgium); Katherine Thornton (Yale University Library, USA); Anisa Rula (University of Milano-Bicocca, Italy, and University of Bonn, Germany). Solicited reviews: Jakub Klimek (Charles University, Czech Republic); John Samuel (CPE Lyon, France); Andra Waagmeester (Maastricht University, Netherlands); Tom Baker (Sungkyunkwan University, South Korea); Dimitris Kontokostas (Universität Leipzig, Germany).
    Abstract. The W3C Generating RDF from Tabular Data on the Web Recommendation provides a mechanism for mapping CSV-formatted data to any RDF graph model. Since the Wikibase data model used by Wikidata can be expressed as RDF, this Recommendation can be used to document tabular snapshots of parts of the Wikidata knowledge graph in a simple form that is easy for humans and applications to read. Those snapshots can be used to document how subgraphs of Wikidata have changed over time and can be compared with the current state of Wikidata using its Query Service to detect vandalism and value added through community contributions.
    Keywords: CSV file, Wikibase model, SPARQL
    1. Introduction. …for using Wikidata as a place to expose and manage data about items of their concern, such as collections records, authors, and authority files.
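A minimal Python sketch of the snapshot-versus-live comparison the abstract describes. The CSV layout (columns qid and p31) and the file name are invented for illustration; the paper's actual approach drives the mapping from CSVW metadata files rather than hard-coded columns.

```python
import csv
import requests

# Compare a tabular snapshot of Wikidata statements with the live
# Query Service, flagging rows whose current values have drifted.
ENDPOINT = "https://query.wikidata.org/sparql"

with open("snapshot.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # hypothetical columns: qid, p31
        query = f"SELECT ?v WHERE {{ wd:{row['qid']} wdt:P31 ?v }}"
        res = requests.get(ENDPOINT,
                           params={"query": query, "format": "json"},
                           headers={"User-Agent": "snapshot-check/0.1"}).json()
        live = {b["v"]["value"].rsplit("/", 1)[-1]
                for b in res["results"]["bindings"]}
        if row["p31"] not in live:
            # A difference may be vandalism or a legitimate community edit.
            print(f"{row['qid']}: snapshot has {row['p31']}, live has {live}")
```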
  • Archiving Complex Digital Artworks
    UvA-DARE (Digital Academic Repository). Archiving complex digital artworks. Barok, D.; Boschat Thorez, J.; Dekker, A.; Gauthier, D.; Roeck, C. DOI: 10.1080/19455224.2019.1604398. Publication date: 2019. Document version: final published version. Published in: Journal of the Institute of Conservation. License: CC BY.
    Citation (APA): Barok, D., Boschat Thorez, J., Dekker, A., Gauthier, D., & Roeck, C. (2019). Archiving complex digital artworks. Journal of the Institute of Conservation, 42(2), 94-113. https://doi.org/10.1080/19455224.2019.1604398
    Abstract. The transmission of the documentation of changes made in each presentation of an artwork and the motivation behind each display are of importance to the continued preservation, re-exhibition and future understanding of artworks.
    [Show full text]
  • Semantic MediaWiki Database Schema
    Chado is a relational database schema that underlies many GMOD installations; it is capable of representing many of the general classes of data… In Semantic MediaWiki, properties can be defined using annotations, and forms can be created and edited not only by administrators.
  • Exploiting Linked Open Data for Enhancing MediaWiki-Based Semantic Organizational Knowledge Bases
    Exploiting Linked Open Data for Enhancing MediaWiki-based Semantic Organizational Knowledge Bases. Matthias Frank (FZI Research Center for Information Technology, Haid-und-Neu-Str. 10-14, Karlsruhe, Germany) and Stefan Zander (Fachbereich Informatik, Hochschule Darmstadt, Darmstadt, Germany).
    Keywords: Linked Open Data, Semantic Web, Wiki Systems, Knowledge Engineering.
    Abstract: One of the main driving forces for the integration of Semantic MediaWiki systems in corporate contexts is their query construction capabilities on top of organization-specific vocabularies, together with the possibility to directly embed query results in wiki pages. However, exploiting knowledge from external sources like other organizational knowledge bases or Linked Open Data, as well as sharing knowledge in a meaningful way, is difficult due to the lack of a common and shared schema definition. In this paper, we introduce Linked Data Wiki (LD-Wiki), an approach that combines the power of Linked Open Vocabularies and Data with established organizational semantic wiki systems for knowledge management. It supports suggestions for annotations from Linked Open Data sources for organizational knowledge bases in order to enrich them with background information from Linked Open Data. The inclusion of potentially uncertain, incomplete, inconsistent or redundant Linked Open Data within an organization's knowledge base poses the challenge of interpreting such data correctly within the respective context. In our approach, we evaluate data provenance information in order to…
  • Linked Data Entity Summarization
    Linked Data Entity Summarization. Dissertation approved by the Faculty of Economic Sciences of the Karlsruhe Institute of Technology (KIT) for the academic degree of Doktor der Ingenieurwissenschaften (Dr.-Ing.), by Dipl.-Inf. Univ. Andreas Thalhammer. Oral examination: 8 December 2016. Referee: Prof. Dr. Rudi Studer; co-referee: Prof. Dr. Dunja Mladenić. Karlsruhe, 2016. This document was created on February 2, 2017. To my mother, Berta Thalhammer, who taught me to finish the things that I start.
    Abstract. In recent years, the availability of structured data on the Web has grown and the Web has become more and more entity-focused. An entity can be a person, a book, a city, etc. In fact, all of these entities are connected in a large knowledge graph. In consequence, a lot of data is often available for single entities. However, in its complete form, the data is not always useful for humans unless it is presented in a concise manner. The task of entity summarization is to identify facts about entities that are particularly notable and worth to be shown to the user. A common usage scenario of entity summarization is given by knowledge panels that are presented on search engine result pages. For producing summaries, search engine providers have a large pool of data readily available in the form of query logs, click paths, user profiles etc. This data is not openly available, and emerging open approaches for producing summaries of entities cannot rely on such background data. In addition, at the point of presentation, summaries are usually strongly tied to the user interfaces of the specific summary providers.
  • An Experiment of Using the Wikibase Data Model for UNIMARC Data
    JLIS.it 9, 3 (September 2018). ISSN 2038-1026 online. Open access article licensed under CC-BY. DOI: 10.4403/jlis.it-12458.
    New ways of creating and sharing bibliographic information: an experiment of using the Wikibase Data Model for UNIMARC data. Giovanni Bergamin (independent scholar, http://orcid.org/0000-0002-2912-5662) and Cristian Bacchi (independent scholar, http://orcid.org/0000-0001-6981-6188). Contact: Giovanni Bergamin, [email protected]; Cristian Bacchi, [email protected]. Received: 22 December 2017; accepted: 1 January 2018; first published: 15 September 2018.
    Abstract. Starting from the consideration that UNIMARC (and the MARC in general) is in fact an ontology, this contribution proposes to make it explicit and to convert it, only at a syntactic level, into Linked Data/RDF structures through the use of the Wikibase data model. The outcome could therefore become not only the publication of data as LOD, but also an environment for the production of bibliographic data that allows different ontological approaches. The authors illustrate the possibility of restructuring the UNIMARC record into distinct items by data type (potentially referring also to the different FRBR entities), retaining the possibility to recover all the information of the original format. They then highlight the Wikibase solutions that become exploitable for the MARC: a "usable version" of the record, with the encoded values made explicit and definitions connected to the data in the same system; identification of unambiguous data with URIs, as required in the context of the Semantic Web; a source for the data recorded in each field; statistics on the presence of fields and subfields; a new storage format natively designed for collaborative editing; export of all elements in standard RDF; and support for modification via an open API. A sketch of this restructuring idea follows below.
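To make the paper's core move concrete, here is a small Python sketch that re-expresses one UNIMARC field as a Wikibase-model statement with a reference. The field chosen and the property IDs are invented for illustration; the paper defines its own, far more complete mapping.

```python
# One UNIMARC field (200 $a, the title proper) as parsed from a record.
unimarc_field = {
    "tag": "200",
    "indicators": "1 ",
    "subfields": {"a": "Il nome della rosa"},
}

# The same information restructured as a Wikibase-model statement:
# a property, a value, and a reference pointing back to the source record.
# "P200A" and "P1" are hypothetical property IDs, not real Wikidata ones.
statement = {
    "type": "statement",
    "mainsnak": {
        "snaktype": "value",
        "property": "P200A",  # hypothetical: "UNIMARC 200 $a (title proper)"
        "datavalue": {"type": "string",
                      "value": unimarc_field["subfields"]["a"]},
    },
    "references": [{
        "snaks": {"P1": [{  # hypothetical: "derived from source record"
            "snaktype": "value",
            "property": "P1",
            "datavalue": {"type": "string", "value": "source-record-id"},
        }]},
    }],
}

# Every field of the original record can be recovered from such statements,
# while each one is individually addressable, referenced, and exportable as RDF.
print(statement["mainsnak"]["datavalue"]["value"])
```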