The Open Annotation Collaboration (OAC) Model


The Open Annotation Collaboration (OAC) Model

Bernhard Haslhofer (Cornell Information Science, Ithaca, USA), Rainer Simon (Austrian Institute of Technology, Vienna, Austria), Robert Sanderson and Herbert van de Sompel (Los Alamos National Laboratory, Los Alamos, USA)

arXiv:1106.5178v1 [cs.DL] 25 Jun 2011

Abstract—Annotations allow users to associate additional information with existing resources. Using proprietary and closed systems on the Web, users are already able to annotate multimedia resources such as images, audio and video. So far, however, this information is almost always kept locked up and inaccessible to the Web of Data. We believe that an important step to take is the integration of multimedia annotations and the Linked Data principles, which should allow clients to easily publish, consume, and thus exchange annotations about resources via common Web standards. We first present the current status of the Open Annotation Collaboration, an international initiative that is working on annotation interoperability specifications based on best practices from the Linked Data effort. We then present two use cases and early prototypes that make use of the proposed annotation model, report lessons learned, and discuss open technical issues.

I. INTRODUCTION

Large-scale media portals such as YouTube and Flickr allow users to attach information to multimedia objects by means of annotations. Web portals hosting multi-lingual collections of millions of digitized items, such as Europeana, are currently investigating how to integrate the knowledge of end users with existing digital curation processes. Annotations are also becoming an increasingly important component in the cyberinfrastructure of many scholarly disciplines.

A Web-based annotation model should fulfill several requirements. In the age of video blogging and real-time sharing of geo-located images, the notion of solely textual annotations has become obsolete. Multimedia Web resources should therefore be annotatable and should themselves be usable to annotate other resources. Users often discuss multiple segments of a resource, or multiple resources, in a single annotation, so the model should support multiple targets. An annotation framework should also follow the Linked Open Data guidelines to promote annotation sharing between systems. And to avoid inaccurate or incorrect annotations, it must take the ephemeral nature of Web resources into account.

Annotations on the Web have many facets: a simple example could be a textual note or a tag (cf. [1]) annotating an image or video. Things become more complex when a particular paragraph in an HTML document annotates a segment (cf. [2]) in an online video, or when someone draws polygon shapes on tiled high-resolution image sets. If we further extend the annotation concept, we could easily regard a large portion of Twitter tweets as annotations on Web resources. In a generic and Web-centric conception, we therefore regard an annotation as an association created between one body resource and other target resources, where the body must in some way be about the target.

Annotea [3] already defines a specification for publishing annotations but has several shortcomings: (i) it was designed for the annotation of Web pages and provides only limited means to address segments in multimedia objects; (ii) clients that want to access annotations need to be aware of the Annotea-specific protocol; and (iii) Annotea annotations do not take into account that Web resources are very likely to have different states over time.

Over the years, several Annotea extensions have been developed to deal with these and other shortcomings. Koivunen [4] introduced additional types of annotations, such as Bookmark and Topic. Schroeter and Hunter [5] proposed to express segments in media objects by using context resources in combination with formalized or standardized descriptions of the context, such as SVG or complex datatypes taken from the MPEG-7 standard. Based on that work, Haslhofer et al. [6] introduced the notion of annotation profiles as containers for content- and annotation-type-specific Annotea extensions, and suggested that annotations should be dereferenceable resources on the Web that follow the Linked Data principles. However, these extensions were developed separately from each other and inherit the above-mentioned Annotea shortcomings.

In this paper, we describe how the Open Annotation Collaboration (OAC), an effort aimed at establishing annotation interoperability, tackles these issues. We describe two annotation use cases — image and historic map annotation — for which we have implemented the OAC model, and report on lessons learned and open issues. We also briefly summarize related work in this area and give an outlook on our future work.

II. THE OPEN ANNOTATION COLLABORATION

The Open Annotation Collaboration (OAC) is an international group with the aim of providing a Web-centric, interoperable annotation environment that facilitates cross-boundary annotations, allowing multiple servers, clients and overlay services to create, discover and make use of the valuable information contained in annotations. To this end, a Linked Data based data model has been adopted.

A. Open Annotation Data Model

The OAC data model tries to pull together the various extensions of Annotea into a cohesive whole. The Web architecture and the Linked Data guidelines are foundational principles, resulting in a specification that can be applied to annotate any set of Web resources. At the time of this writing, the specification, which is available at http://www.openannotation.org/spec/alpha3/, is still under development.

Following its predecessors, the OAC model, shown in Figure 1, has three primary classes of resources. In all cases below, the oac namespace prefix expands to http://www.openannotation.org/ns/.

- The oac:Body of the annotation (node B-1). This resource is the comment, metadata or other information that is created about another resource. The Body can be any Web resource, of any media format, available at any URI. The model allows for exactly one Body per Annotation.
- The oac:Target of the annotation (node T-1). This is the resource that the Body is about. Like the Body, it can be any URI-identified resource. The model allows for one or more Targets per Annotation.
- The oac:Annotation (node A-1). This resource stands for a document, identified by an HTTP URI, that describes at least the Body and Target resources involved in the annotation as well as any additional properties and relationships (e.g., dcterms:creator). Dereferencing an annotation's HTTP URI returns a serialization in a permissible RDF format.

[Fig. 1. OAC Baseline Data Model — a graph linking the Annotation (A-1) to its Body (B-1) and Target (T-1) via oac:hasBody and oac:hasTarget; a Constraint Target (CT-1) relates to a Constraint (C-1) via oac:constrainedBy and to the target via oac:constrains; further properties shown include dcterms:creator, dcterms:created, foaf:name, foaf:mbox, cnt:chars and cnt:characterEncoding.]

If the Body of an annotation is identified by a dereferenceable HTTP URI, as is the case in Twitter, various blogging platforms, or Google Docs, it can easily be referenced from an annotation. If a client cannot create HTTP URIs for an annotation Body, for instance because it is an offline client, it can assign a unique non-resolvable URI (a URN) as the identifier for the Body node. This approach can still be reconciled with the Linked Data principles: servers that publish such annotations can assign HTTP URIs they control to the Bodies and express the equivalence between the HTTP URI and the URN.

The OAC model also allows textual information to be included directly in the annotation document, by adding the representation of a resource as plain text to the Body via the cnt:chars property and defining the character encoding using cnt:characterEncoding [7].

B. OAC and Linked Data

Several Linked Data principles have influenced the approach taken by the Open Annotation Collaboration. The promotion of a publish/discover approach for handling Annotations, as opposed to the protocol-oriented approach taken by Annotea, stands out. The emphasis on using HTTP URIs, wherever possible, for the resources involved in Annotations also reflects the Linked Data philosophy. Earlier versions of the data model considered an Annotation to be an OAI-ORE Aggregation [8] and hence a conceptual non-information resource; this was dropped as it was deemed artificial, and the complexities of handling the difference between information and non-information resources at the protocol level were considered a hindrance to potential adoption.

C. Addressing Media Segments

Many annotations concern parts, or segments, of resources rather than the entirety of the resource. While simple segments of resources can be identified and referenced directly using the emerging W3C Media Fragments specification [9] or media-type-specific fragment identifiers as defined in RFCs, there are many use cases for segments that currently cannot be identified using this proposal. One example is the annotation of historic maps, where users need to draw polygon shapes around the geographic areas they want to address with their notes. Sanderson et al. [10] describe another use case where annotators express the relationships between images, texts and other resources in medieval manuscripts by means of line segments. For these and other use cases, which require the expression of complex media segment information, the current W3C Media Fragments specification is insufficient.

For this reason, OAC introduces the oac:ConstraintTarget node (CT-1), which constrains the annotation target to a specific segment that is further described by an oac:Constraint node (C-1). The description of the constraint depends on the application scenario and on the (media) type of the annotated target resource. For example, an SVG path could be used to describe a region within an image.

D. Robust Annotations in Time

It must be stressed that different agents may create the Annotation, Body and Target at different times. For example, Alice might create an annotation saying that Bob's YouTube video annotates Carol's Flickr photo.