Interchange Formats for Visualization: LIF and MMIF

Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pages 7230–7237, Marseille, 11–16 May 2020. © European Language Resources Association (ELRA), licensed under CC-BY-NC.

Kyeongmin Rim*, Kelley Lynch*, Marc Verhagen*, Nancy Ide†, James Pustejovsky*
* Brandeis University, Department of Computer Science, {krim,kmlynch}@brandeis.edu, [email protected], [email protected]
† Vassar College, Department of Computer Science, [email protected]

Abstract

Promoting interoperable computational linguistics (CL) and natural language processing (NLP) application platforms and interchangeable data formats has contributed to improving the discoverability and accessibility of openly available NLP software. In this paper, we discuss the enhanced data visualization capabilities that are also enabled by inter-operating NLP pipelines and interchange formats. To add openly available visualization tools and graphical annotation tools to the Language Applications Grid (LAPPS Grid) and Computational Linguistics Applications for Multimedia Services (CLAMS) toolboxes, we have developed interchange formats that can carry annotations and metadata for text and audiovisual source data. We describe those data formats and present case studies in which we successfully adopt open-source visualization tools and combine them with CL tools.

Keywords: Interchange formats, Interoperability, Data visualization, Multi-media annotation

1. Introduction

In this paper, we discuss the enhanced data visualization capabilities enabled by open-source natural language processing (NLP) tools (used either separately or pipelined in application workflows) as well as by openly available visualization toolkits. While popular text-based search and navigation applications such as the ElasticSearch+Kibana stack (ELK, https://www.elastic.co/) or Solr (https://lucene.apache.org/solr/) are beginning to provide some Human Language Technology (HLT)-enabled indexing that can then be visualized (e.g., named entity recognition (NER)), we argue that a more robust and generic capability for visualizing data is enabled by adopting common, rich, and flexible interchange formats from which the visualization can be mapped. To this end, we describe the use of the LAPPS Interchange Format (LIF) and the Multi-Media Interchange Format (MMIF) as pivots from which visualizations can be driven.

Specifically, we discuss two different scenarios for the visualization of data: (i) visualization of an individual document and its annotations; and (ii) visualization of a collection of documents that is processed offline in order to ultimately provide a display of collection-wide phenomena. We demonstrate how visualization of an individual document and its annotations is performed by converting between pivoting interchange formats (specifically LIF and/or MMIF) and application-specific formats. Using interchange formats enables the chaining of tools into a pipeline which generates automatic annotations that can then be visualized or modified by a human-in-the-loop in multiple annotation environments. Additionally, we describe the functionality enabled by using LIF and MMIF as interchange formats within the Language Applications Grid (LAPPS Grid)-Galaxy and Computational Linguistics Applications for Multimedia Services (CLAMS), respectively. The second scenario involves the visualization of data characteristics that inhere over an entire collection. We illustrate how MMIF and LIF can facilitate visualization across a collection of documents by being exported for indexing in document indexing tools (ELK or Solr) and associated front-end web interfaces (such as Kibana) that support per-index visualization.
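To make the second scenario concrete, the sketch below shows one way the named entity annotations in a LIF document could be flattened into Elasticsearch bulk-index lines. This is a minimal sketch under stated assumptions: the index name, the flattened document fields, and the function name are ours for illustration, not part of LIF or of the LAPPS Grid tooling.

    import json

    def lif_entities_to_bulk(lif_path, index_name="lif-entities"):
        """Flatten NamedEntity annotations from a LIF file into
        Elasticsearch bulk-API lines (one action line plus one
        document line per entity).

        Assumes the common LIF layout: primary text under
        text["@value"] and annotations grouped into views.
        """
        with open(lif_path, encoding="utf-8") as f:
            lif = json.load(f)
        text = lif["text"]["@value"]  # read-only primary data
        lines = []
        for view in lif.get("views", []):
            for ann in view.get("annotations", []):
                if not ann.get("@type", "").endswith("NamedEntity"):
                    continue
                start, end = ann["start"], ann["end"]
                doc = {
                    "surface": text[start:end],
                    "category": ann.get("features", {}).get("category"),
                    "start": start,
                    "end": end,
                    "view": view.get("id"),
                }
                lines.append(json.dumps({"index": {"_index": index_name}}))
                lines.append(json.dumps(doc))
        return "\n".join(lines) + "\n"

Posting the result to Elasticsearch's _bulk endpoint makes the entities aggregatable in Kibana, for example as a chart of entity categories over the whole collection.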
2. Interchange Formats and Interoperability

Since we use interchange formats as the mediators between data and their visualizations, we describe here the two formats that we have used in our work.

2.1. LAPPS Interchange Format

As a data-driven research community, having a common data format that can be used across different data processing projects has been one of the most important goals shared among the computational linguistics (CL) community. Within the CL community, the Unstructured Information Management Applications (UIMA) framework (Ferrucci et al., 2009) and the General Architecture for Text Engineering (GATE) (Cunningham et al., 2013) have served as well-established and popular tool-chaining platforms for researchers and NLP developers. Although GATE focuses primarily on textual data, UIMA provides an extremely general model of type systems and annotations that can be applied to multimedia source data. However, UIMA's generality comes with a steep learning curve, due in large part to its tight binding to XML syntax and the Java programming language. More recently, web-based workflow engines such as the LAPPS Grid (Ide et al., 2014) and WebLicht (Hinrichs et al., 2010) have been developed that provide user-friendly web interfaces for chaining NLP tools. These platforms not only offer tool repositories containing state-of-the-art NLP tools for annotating textual data at a variety of linguistic levels (e.g., CoreNLP (Manning et al., 2014), OpenNLP (OpenNLP, 2017), UDPipe (Straka and Straková, 2017)), but also provide open-source software development kits (SDKs) for tool developers in order to promote adoption. The LAPPS Grid and WebLicht both provide for chaining tools from different developers, which use a variety of I/O formats, by virtue of underlying data interchange formats that impose a common I/O format among those tools. The LAPPS Grid uses LIF (Verhagen et al., 2015), a JSON-LD serialization, as its interchange format, while WebLicht uses its XML-based Text Corpus Format (TCF) (Heid et al., 2010). Additionally, the LAPPS Grid defines a linked data vocabulary that ensures semantic interoperability (Ide et al., 2015). Beyond in-platform interoperability, the LAPPS Grid has established multi-platform interoperability between the LAPPS Grid and two CLARIN platforms (Hinrichs et al., 2018) as well as several other platforms (e.g., DKPro (Eckart de Castilho and Gurevych, 2014), PubAnnotation (Kim and Wang, 2012), and INCEpTION (Klie et al., 2018)).

Figure 1: Named entity annotations in the LAPPS Grid, generated by LINDAT/CLARIN's NameTag tool and visualized with Brat.

Figure 1 shows a visualization of named entity annotations in the LAPPS Grid, using Brat (Stenetorp et al., 2012). All annotations are represented in the LIF format and linked either to offsets within read-only primary data or to other annotation layers. Within the LIF document containing the annotations, each annotation references a name (e.g., PERSON), possibly coupled with additional attributes, that links to a full definition in the LAPPS Grid Web Service Exchange Vocabulary (WSEV, https://vocab.lappsgrid.org/; note the reference in the @context field of the LIF tool output). Alternative names used within specific tools are mapped to the WSEV in order to ensure semantic consistency among tools from different sources.

Although it is not shown in the trimmed-down screenshot in Figure 1, the views list contains a view generated by NameTag, and that view includes a list of annotations, one of which looks like the fragment below.

    {
      "@type": "NamedEntity",
      "id": "c0",
      "start": 24,
      "end": 38,
      "features": {
        "category": "PERSON"
      }
    }

The @type attribute's value is a shorthand for the full form http://vocab.lappsgrid.org/NamedEntity, which contains the definition of the NamedEntity annotation type. The annotation is anchored in the text, and its features dictionary gives the information relevant for named entity annotations. All annotations in LIF follow this format.
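Because every annotation anchors on character offsets into read-only primary data, converting a LIF view into a visualization tool's native format is mechanical. As a hedged sketch (the function name and view-selection logic are ours, not part of any LAPPS Grid SDK), the snippet below renders NamedEntity annotations from a parsed LIF document as Brat standoff lines:

    def lif_view_to_brat(lif, view_id=None):
        """Render NamedEntity annotations from a parsed LIF document
        (a dict) as Brat standoff lines, one "T" line per entity.
        """
        text = lif["text"]["@value"]
        lines, n = [], 0
        for view in lif.get("views", []):
            if view_id is not None and view.get("id") != view_id:
                continue  # only convert the requested view
            for ann in view.get("annotations", []):
                if ann.get("@type", "").endswith("NamedEntity"):
                    n += 1
                    cat = ann.get("features", {}).get("category", "Entity")
                    s, e = ann["start"], ann["end"]
                    lines.append(f"T{n}\t{cat} {s} {e}\t{text[s:e]}")
        return "\n".join(lines)

Applied to the fragment above, this yields a line of the form "T1<tab>PERSON 24 38<tab>..." that Brat can display alongside the primary text; the reverse mapping, from Brat edits back into a LIF view, is equally direct because both representations anchor on character offsets.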
2.2. Multi-Media Interchange Format

Recent advances in high-throughput data processing and machine learning algorithms have brought impressive performance gains not only in NLP, but also in computer vision (CV) and in speech technologies processing audio and video data. The machine learning approach to solving problems is data-driven, and most state-of-the-art applications are based on supervised algorithms which rely on large sets of training data. To ensure the high quality of datasets containing rich multimodal annotations, the CL community has had to move beyond text-only annotation practices, and has tried to establish a common format for multimodal annotations, particularly with regard to annotating speech in audio and gestures in video. For example, Schmidt et al. (2008) outline a diversity of annotation applications and formats, as well as the community effort to develop an interoperable format that carries complicated, layered multimodal annotation. As a result, UIMA and the Component MetaData Infrastructure (CMDI) (Broeder et al., 2012) have been widely adopted frameworks that provide interoperable multimodal information exchange between computational analysis tools, and metadata repositories for discoverability, respectively. More recently, the International Image Interoperability …

Figure 2: A primary data text collection can be created from an image and can then be input to downstream NLP components.

Specifically, multimodal annotations in MMIF are first categorized by the anchor type on which the annotation is placed. That is, an annotation can be placed on: (1) character offsets of a text; (2) time segments of time-based media; or (3) two-dimensional (w × h) or three-dimensional (w × h × duration) bounding boxes.
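The sketch below illustrates these three anchor categories with hand-written annotation stubs. The field names and type labels are assumptions modeled on the LIF fragment shown earlier, for illustration only; the published MMIF specification is authoritative for the actual layout.

    # Illustrative stubs for the three MMIF anchor categories
    # described above. Field names are assumptions modeled on the
    # LIF fragment; consult the MMIF specification for the
    # authoritative schema.

    text_anchor = {   # (1) character offsets into a text
        "@type": "NamedEntity",
        "start": 24, "end": 38,
    }

    time_anchor = {   # (2) a segment of time-based media
        "@type": "TimeFrame",
        "start": 15000, "end": 21500,  # offsets in milliseconds (assumed)
    }

    box_anchor = {    # (3) a 2-D (w x h) or 3-D (w x h x duration) box
        "@type": "BoundingBox",
        "coordinates": [[120, 45], [320, 45], [120, 210], [320, 210]],
    }

Keeping all three anchor kinds in a single JSON document is what allows downstream tools to align text, time segments, and image regions over the same source media.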
