Archiving Complex Digital Artworks

Journal of the Institute of Conservation, 2019, Vol. 42, No. 2, 94–113, https://doi.org/10.1080/19455224.2019.1604398

Dušan Barok, Julie Boschat Thorez, Annet Dekker, David Gauthier and Claudia Roeck

(Received 16 January 2019; Accepted 3 April 2019)

© 2019 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The transmission of the documentation of changes made in each presentation of an artwork and the motivation behind each display are of importance to the continued preservation, re-exhibition and future understanding of artworks. However, it is generally acknowledged that existing digital archiving and documentation systems used by many museums are not suitable for complex digital artworks. Looking for an approach that can easily be adjusted, shared and adopted by others, this article focusses on open-source alternatives that also enable collaborative working to facilitate the sharing and changing of information. As an interdisciplinary team of conservators, researchers, artists and programmers, the authors set out to explore and compare the functionalities of two systems featuring version control: MediaWiki and Git. We reflect on their technical details, virtues and shortcomings for archiving complex digital artworks, while looking at the potential they offer for collaborative workflows.

Keywords

art documentation; archiving; preservation; media conservation; complex digital art; version control

Introduction

In 2017 UBERMORGEN, a Swiss-Austrian-American artist duo, submitted a selection of their works to be taken into the collection of LIMA, a platform for media art in Amsterdam. UBERMORGEN’s main body of work consists of internet art, installation, video art, photography, software art and performance, and uses the convergence of digital media to produce and publish online and offline. Most of their early works were media hacking projects using low-tech tools to reach large audiences. As part of LIMA’s event series Cultural Matter—about the preservation, presentation and distribution of digital art—researcher and artist Julie Boschat Thorez was asked to select one of the artworks as a starting point to discuss and contextualise the art historical and technical importance of their works. She selected Chinese Gold (2006–ongoing), a project on the phenomenon of industrial-scale gold mining in the online video game World of Warcraft and operated from China. The project seemed to be the best candidate because some time had passed since it was initiated and it had been exhibited several times in multiple ways. It also represents the type of work UBERMORGEN is known for, involving a lot of research from which different works develop. Hence, a careful consideration of the contexts and history was needed to understand the meaning of the work and any of its subsequent preservation measures.

Some of the first questions focussed on the different elements of the work: what was the difference between the work and the documentation? What should be considered as research or contextual material and what should be seen as the actual work? Would it be necessary to make such distinctions? How to preserve and present a work that does not have a determined form?
To provide some answers to those questions, many discussions took place with the artists, who also gave access to their extensive archive of the project, which complemented the information already gathered through online research.

The first step was to list all the work’s elements and its multiple presentations, both online and offline. With this information in place, and with the help of the artists, an extensive description of the project could be made, as well as a choice about which elements to keep. Chinese Gold can be described as a complex digital artwork[1]—a heterogeneous assemblage from which the various elements can be combined, composed and compiled in different ways, at different times and locations (online and offline) and by different people.[2] The next step involved the research team thinking about how to archive all the documents, documentation and other materials that were collected in a way that would do justice to the ever-changing nature of the work and would also be useful for future preservation projects.

The transmission of an artist’s research, the documentation of any changes in the presentation of the work, and the motivation behind each display are of importance to the continued preservation, re-exhibition and future understanding of the work. However, it is generally acknowledged that existing digital archiving and documentation systems used by many museums, such as The Museum System or Adlib, are not suitable for these particular kinds of artworks due to their rigidity. For example, one cannot easily represent changes in the evolution of the work, nor show the relations between its different elements.[3] Even though standard schemas can be adjusted to specific needs, most applications are developed by commercial companies and this kind of flexibility comes at a price. Moreover, proprietary solutions usually have high licensing costs and lack a more open model of governance. To move away from these systems and, more importantly, to find an approach that can easily be adjusted, shared and adopted by others, the research team focussed on open-source alternatives that would also enable collaborative working to facilitate the (future) sharing and changing of information. With this choice, the team also hoped to build alliances with existing communities of practice that are testing and using alternative, open-source documentation systems.
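To make concrete what such a version-controlled documentation workflow can look like, here is a minimal, hypothetical sketch (not taken from the article) in which a folder of documentation files is tracked with Git; the folder name, file name, commit messages and user identity are invented placeholders.

```python
# Illustrative sketch only: versioning artwork documentation with Git from Python.
# All paths, file names, messages and identities below are hypothetical.
import subprocess
from pathlib import Path

repo = Path("chinese-gold-archive")   # hypothetical documentation folder
repo.mkdir(exist_ok=True)

def git(*args: str) -> str:
    """Run a git command inside the documentation repository and return its output."""
    result = subprocess.run(["git", *args], cwd=repo, check=True,
                            capture_output=True, text=True)
    return result.stdout

git("init")
git("config", "user.name", "Archive Team")          # local identity for this sketch
git("config", "user.email", "archive@example.org")

# First documented state of the work's exhibition history.
(repo / "exhibition-history.md").write_text("2006: first online presentation\n")
git("add", "exhibition-history.md")
git("commit", "-m", "Document first presentation of the work")

# A later change to the display is recorded as a new revision, not an overwrite.
(repo / "exhibition-history.md").write_text(
    "2006: first online presentation\n"
    "2009: gallery installation with a reworked display\n"
)
git("add", "exhibition-history.md")
git("commit", "-m", "Record changed display for the 2009 re-exhibition")

print(git("log", "--oneline"))         # the full change history with edit rationales
print(git("diff", "HEAD~1", "HEAD"))   # exactly what changed between two versions
```

Because every revision keeps its author, date and commit message, earlier states of the documentation remain recoverable together with the reasons for each change, which is the kind of information the article identifies as important for re-exhibition and future preservation.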
While the research presented here is not focussed on the preservation of the different elements of an artwork, it does provide a means to discuss alternative ways of documenting the changes and different versions of an artwork that take place in its biography and exhibition history, which are important to consider for both future redisplays and preservation of the work.[4] In this sense the research builds on discussions around the value of allographic provenance and versioning in relation to complex artworks.[5] As an interdisciplinary team of conservators, researchers, artists and programmers, we set out to explore and compare the functionalities of two systems featuring version control and a web interface: MediaWiki, and Git with its associated repository manager GitLab.[6]

Another reason for taking this direction relates to several early steps taken in archival and conservation practice to test the usefulness of wiki-based platforms and version control systems for documenting artworks.[7] This research could be useful to further that discussion and expand the working methods and possibilities.

The study then focussed on how the version control elements of these systems encourage collaboration between conservators, curators and/or artists in archiving complex digital artworks, reflecting on the technical details of the different systems and their virtues and shortcomings.

Chinese Gold: the data

UBERMORGEN were founded in 1995 by Lizvlx and Hans Bernhard. Together they developed a series of landmark projects in digital art, including Vote-Auction (2000), a media performance involving a false site where Americans could supposedly put their vote up for auction, and Google Will Eat Itself (GWEI) (2005, in collaboration with Alessandro Ludovico and Paolo Cirio), a project that proposed using Google’s own advertising revenue to buy up every single share in the company.

Notes

1. In contemporary art conservation, complex artworks have been considered to be installations and other types of work with one or more of the following elements: variable form (e.g. involving non-dedicated, replaceable components), conceptual or otherwise immaterial features crucial for re-exhibition, being process-based and being open-ended—see, among others, Pip Laurenson, ‘The Conservation and Documentation of Video Art’, in Modern Art: Who Cares? (Amsterdam: Foundation for the Conservation of Modern Art, 1999).
2. A similar way of working, and thus set of challenges, is inherent in many digital artworks, including mouchette.org by Martine Neddam and the practices of Young-Hae Chang Heavy Industries or Lynn Hershman Leeson. For more information see, for example, Annet Dekker, Gabriella Giannachi, and Vivian van Saaze, ‘Expanding Documentation, and Making the Most of the “Cracks in the Wall”’, in Documenting Performance: The Context and Processes of Digital Curation and Archiving, ed. Toni Sant (London/New York: Bloomsbury, 2017), 61–78; and Annet Dekker, Collecting and Conserving Net Art: Moving beyond Conventional Methods (Oxon: Routledge, 2018).
3. Cf. Annet Dekker and Patricia Falcão, ‘Interdisciplinary Discussions about the Conservation of Software-Based Art. Community of Practice on Software-Based Art’, PERICLES, March 2017, http://www.tate.org.uk/download/file/fid/108032 (accessed 23 September 2018); and Deena Engel and Glenn Wharton, ‘Managing Contemporary Art Documentation in Museums and Special Collections’, Art Documentation: Journal of the Art Libraries Society of North America 36, no. 2 (2017): 293–311.
4. On an artwork’s biography see, for example, Renée van de Vall et al., ‘Reflections on a Biographical Approach to Contemporary Art Conservation’, in ICOM-CC 16th Triennial Conference—Lisbon 2011, ed. J. Bridgland (Almada: Critério, 2011), 1–8.
5. For more information see, for instance, Renée van de Vall, ‘Documenting Dilemmas. On the Relevance of Ethically Ambiguous Cases’, Revista de História da Arte (2015): 7–17; and Dekker, Collecting and Conserving Net Art, 127–39.
6. This research began with the ‘Versioning the Networked Archive’ workshop […]
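As a purely illustrative counterpart on the MediaWiki side (not part of the original article), the following sketch queries a wiki’s standard Action API for the revision history of a single documentation page; the wiki URL and page title are hypothetical placeholders.

```python
# Illustrative sketch only: reading a documentation page's revision history
# through the MediaWiki Action API. The wiki URL and page title are invented.
import requests

API_URL = "https://wiki.example.org/w/api.php"      # placeholder MediaWiki instance
PAGE_TITLE = "Chinese Gold/Exhibition history"      # hypothetical page title

params = {
    "action": "query",
    "prop": "revisions",
    "titles": PAGE_TITLE,
    "rvprop": "ids|timestamp|user|comment",  # who changed the page, when, and why
    "rvlimit": 50,
    "format": "json",
}

response = requests.get(API_URL, params=params, timeout=30)
response.raise_for_status()
pages = response.json()["query"]["pages"]

for page in pages.values():
    for rev in page.get("revisions", []):
        # Each revision carries the author, date and edit summary: the provenance
        # trail that makes changes to the documentation traceable over time.
        print(rev["revid"], rev["timestamp"], rev["user"], rev["comment"])
```

Read alongside the Git sketch above, it shows that both systems expose comparable revision metadata (author, date and a summary of the change), which is the kind of information the study compares across the two systems.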
Recommended publications

• GND meets Wikibase (Barbara Fischer and Sarah Hartmann, CLARIN presentation, 2020)
• Creating Library Linked Data with Wikibase: Lessons Learned from Project Passage (Godby et al., OCLC Research, 2019)
• Working with MediaWiki (Yaron Koren, WikiWorks Press, 2012)
• Analyzing Wikidata Transclusion on English Wikipedia (Isaac Johnson, Wikimedia Foundation)
• RDFIO: Extending Semantic MediaWiki for Interoperable Biomedical Data Management (Lampa et al., Journal of Biomedical Semantics, 2017)
• Using the W3C Generating RDF from Tabular Data on the Web Recommendation to Manage Small Wikidata Datasets (Baskauf and Baskauf)
• Archiving Complex Digital Artworks (UvA-DARE repository record of this article)
• Semantic MediaWiki Database Schema
• Exploiting Linked Open Data for Enhancing MediaWiki-based Semantic Organizational Knowledge Bases (Frank and Zander)
• Linked Data Entity Summarization (Andreas Thalhammer, doctoral dissertation, KIT, 2016)
• An Experiment of Using the Wikibase Data Model for UNIMARC Data (Giovanni Bergamin and Cristian Bacchi, JLIS.it, 2018)