Overview of Annotation Creation: Processes & Tools
Recommended publications
-
9.1.1 Lesson 5
NYS Common Core ELA & Literacy Curriculum, Grade 9 • Module 1 • Unit 1 • Lesson 5 (Draft)

Introduction

In this lesson, students continue to practice reading closely through annotating text. Students will begin learning how to annotate text using four annotation codes and marking their thinking on the text. Students will reread the Stage 1 epigraph and narrative of "St. Lucy's Home for Girls Raised by Wolves," to "Neither did they" (pp. 225–227). The text annotation will serve as a scaffold for teacher-led text-dependent questions. Students will participate in a discussion about the reasons for text annotation and its connection to close reading. After an introduction and modeling of annotation, students will practice annotating the text with a partner. As student partners practice annotation, they will discuss the codes and notes they write on the text. The annotation codes introduced in this lesson will be used throughout Unit 1.

For the lesson assessment, students will use their annotation as a tool to find evidence to independently answer, in writing, one text-dependent question. For homework, students will continue to read their Accountable Independent Reading (AIR) texts, this time using a focus standard to guide their reading.

Standards

Assessed Standard(s): RL.9-10.1 — Cite strong and thorough textual evidence to support analysis of what the text says explicitly as well as inferences drawn from the text.

Addressed Standard(s): RL.9-10.3 — Analyze how complex characters (e.g., those with multiple or conflicting motivations) develop over the course of a text, interact with other characters, advance the plot, or develop the theme.

-
A Super-Lightweight Annotation Tool for Experts
SLATE: A Super-Lightweight Annotation Tool for Experts
Jonathan K. Kummerfeld, Computer Science & Engineering, University of Michigan, Ann Arbor, MI, USA ([email protected])

Abstract: Many annotation tools have been developed, covering a wide variety of tasks and providing features like user management, pre-processing, and automatic labeling. However, all of these tools use Graphical User Interfaces, and often require substantial effort to install and configure. This paper presents a new annotation tool that is designed to fill the niche of a lightweight interface for users with a terminal-based workflow. SLATE supports annotation at different scales (spans of characters, tokens, and lines, or a document) and of different types (free text, labels, and links), with easily customisable keybindings, and unicode support. In a user study comparing with other tools it was consistently the easiest to install.

From the paper: we minimise the time cost of installation by implementing the entire system in Python using built-in libraries. Third, we follow the Unix Tools Philosophy (Raymond, 2003) to write programs that do one thing well, with flat text formats. In our case, (1) the tool only does annotation, not tokenisation, automatic labeling, file management, etc., which are covered by other tools, and (2) data is stored in a format that both people and command line tools like grep can easily read. SLATE supports annotation of items that are continuous spans of either characters, tokens, lines, or documents. For all item types there are three forms of annotation: labeling items with categories, writing free-text labels, or linking pairs of items.

-
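As a concrete illustration of the flat-text storage philosophy described above, the sketch below writes and reads span annotations in a one-record-per-line format that command line tools like grep can search directly. The field layout here is hypothetical, chosen for the example; it is not SLATE's actual on-disk format.

```python
# Sketch of a grep-friendly standoff annotation format, one record per line:
# "<line>:<start>-<end>\t<label>". Hypothetical layout, not SLATE's real format.

def write_annotations(path, annotations):
    """Write (line_no, start_token, end_token, label) tuples, one per line."""
    with open(path, "w", encoding="utf-8") as f:
        for line_no, start, end, label in annotations:
            f.write(f"{line_no}:{start}-{end}\t{label}\n")

def read_annotations(path):
    """Parse the flat file back into (line_no, start, end, label) tuples."""
    out = []
    with open(path, encoding="utf-8") as f:
        for row in f:
            span, label = row.rstrip("\n").split("\t")
            line_no, toks = span.split(":")
            start, end = toks.split("-")
            out.append((int(line_no), int(start), int(end), label))
    return out
```

Because each record is a single tab-separated line, `grep 'PER$' annotations.tsv` finds every person-labeled span without any special tooling.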
Segmentability Differences Between Child-Directed and Adult-Directed Speech: a Systematic Test with an Ecologically Valid Corpus
Segmentability Differences Between Child-Directed and Adult-Directed Speech: A Systematic Test With an Ecologically Valid Corpus
Alejandrina Cristia (1), Emmanuel Dupoux (1,2,3), Nan Bernstein Ratner (4), and Melanie Soderstrom (5)
1 Dept d'Etudes Cognitives, ENS, PSL University, EHESS, CNRS; 2 INRIA; 3 FAIR Paris; 4 Department of Hearing and Speech Sciences, University of Maryland; 5 Department of Psychology, University of Manitoba

Keywords: computational modeling, learnability, infant word segmentation, statistical learning, lexicon

Abstract: Previous computational modeling suggests it is much easier to segment words from child-directed speech (CDS) than adult-directed speech (ADS). However, this conclusion is based on data collected in the laboratory, with CDS from play sessions and ADS between a parent and an experimenter, which may not be representative of ecologically collected CDS and ADS. Fully naturalistic ADS and CDS collected with a nonintrusive recording device as the child went about her day were analyzed with a diverse set of algorithms. The difference between registers was small compared to differences between algorithms; it reduced when corpora were matched, and it even reversed under some conditions. These results highlight the interest of studying learnability using naturalistic corpora and diverse algorithmic definitions.

Citation: Cristia, A., Dupoux, E., Ratner, N. B., & Soderstrom, M. (2019). Segmentability Differences Between Child-Directed and Adult-Directed Speech: A Systematic Test With an Ecologically Valid Corpus. Open Mind: Discoveries in Cognitive Science, 3, 13–22. https://doi.org/10.1162/opmi_a_00022 (received 15 May 2018; supplemental materials: https://osf.io/th75g/)

Introduction: Although children are exposed to both child-directed speech (CDS) and adult-directed speech (ADS), children appear to extract more information from the former than the latter (e.g., Cristia, 2013; Shneidman & Goldin-Meadow, 2012).

-
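Word segmentation algorithms of the kind compared in this line of work vary widely, but a minimal statistical-learning baseline can be sketched as follows: estimate transitional probabilities between syllables and posit word boundaries at probability dips. This is an illustrative toy, not one of the specific algorithms evaluated in the paper.

```python
from collections import Counter

def transitional_probs(utterances):
    """Estimate P(next syllable | syllable) from whitespace-split syllable strings."""
    bigrams, unigrams = Counter(), Counter()
    for utt in utterances:
        sylls = utt.split()
        unigrams.update(sylls)
        bigrams.update(zip(sylls, sylls[1:]))
    return {(a, b): c / unigrams[a] for (a, b), c in bigrams.items()}

def segment(sylls, tp, threshold=0.5):
    """Posit a word boundary wherever transitional probability dips below threshold.

    Assumes a non-empty syllable list; returns a list of words (syllable lists).
    """
    words, current = [], [sylls[0]]
    for a, b in zip(sylls, sylls[1:]):
        if tp.get((a, b), 0.0) < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words
```

Running such a baseline over CDS and ADS transcripts separately is one simple way to operationalize a "segmentability" comparison between registers.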
Multimedia Corpora (Media Encoding and Annotation) (Thomas Schmidt, Kjell Elenius, Paul Trilsbeek)
Multimedia Corpora (Media Encoding and Annotation)
Thomas Schmidt, Kjell Elenius, Paul Trilsbeek

Draft submitted to CLARIN WG 5.7 as input to CLARIN deliverable D5.C3 "Interoperability and Standards" [http://www.clarin.eu/system/files/clarindeliverableD5C3_v1_5finaldraft.pdf]

Table of Contents
1 General distinctions / terminology
1.1 Different types of multimedia corpora: spoken language vs. speech vs. phonetic vs. multimodal corpora vs. sign language corpora
1.2 Media encoding vs. Media annotation
1.3 Data models/file formats vs. Transcription systems/conventions
1.4 Transcription vs. Annotation / Coding vs. Metadata
2 Media encoding
2.1 Audio encoding
2.2

-
Active Learning with a Human in the Loop
MITRE Technical Report MTR120603

Active Learning with a Human In The Loop
Seamus Clancy, Sam Bayer, Robyn Kozierok
November 2012

Sponsor: MITRE Innovation Program. Dept. No.: G063. Contract No.: W15P7T-12-C-F600. Project No.: 0712M750-AA.

The views, opinions, and/or findings contained in this report are those of The MITRE Corporation and should not be construed as an official Government position, policy, or decision, unless designated by other documentation. This technical data was produced for the U.S. Government under Contract No. W15P7T-12-C-F600, and is subject to the Rights in Technical Data—Noncommercial Items clause (DFARS) 252.227-7013 (NOV 1995). © 2013 The MITRE Corporation. All Rights Reserved. Approved for Public Release.

Abstract: Text annotation is an expensive pre-requisite for applying data-driven natural language processing techniques to new datasets. Tools that can reliably reduce the time and money required to construct an annotated corpus would be of immediate value to MITRE's sponsors. To this end, we have explored the possibility of using active learning strategies to aid human annotators in performing a basic named entity annotation task.

-
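Active learning strategies of the kind explored in this report typically route the examples the model is least sure about to the human annotator first. A minimal uncertainty-sampling sketch (an illustrative baseline, not MITRE's actual implementation) might look like:

```python
import math

def entropy(dist):
    """Shannon entropy of a predicted label distribution (list of probabilities)."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def pick_next(unlabeled, predict_proba, k=1):
    """Select the k unlabeled items whose model predictions are most uncertain.

    predict_proba maps an item to its predicted label distribution.
    """
    scored = sorted(unlabeled, key=lambda x: entropy(predict_proba(x)), reverse=True)
    return scored[:k]
```

After the annotator labels the selected items, the model is retrained and the loop repeats; the hope, as the report investigates, is that fewer total annotations are needed to reach a given accuracy.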
Gold Standard Annotations for Preposition and Verb Sense with Semantic Role Labels in Adult-Child Interactions
Gold Standard Annotations for Preposition and Verb Sense with Semantic Role Labels in Adult-Child Interactions
Lori Moon (University of Illinois at Urbana-Champaign, [email protected]), Christos Christodoulopoulos (Amazon Research, [email protected]), Cynthia Fisher (University of Illinois at Urbana-Champaign, [email protected]), Sandra Franco (Intelligent Medical Objects, Northbrook, IL, USA, [email protected]), Dan Roth (University of Pennsylvania, [email protected])

Abstract: This paper describes the augmentation of an existing corpus of child-directed speech. The resulting corpus is a gold-standard labeled corpus for supervised learning of semantic role labels in adult-child dialogues. Semantic role labeling (SRL) models assign semantic roles to sentence constituents, thus indicating who has done what to whom (and in what way). The current corpus is derived from the Adam files in the Brown corpus (Brown, 1973) of the CHILDES corpora, and augments the partial annotation described in Connor et al. (2010). It provides labels for both semantic arguments of verbs and semantic arguments of prepositions. The semantic role labels and senses of verbs follow PropBank guidelines (Kingsbury and Palmer, 2002; Gildea and Palmer, 2002; Palmer et al., 2005), and those for prepositions follow Srikumar and Roth (2011). The corpus was annotated by two annotators. Inter-annotator agreement is given separately for prepositions and verbs, and for adult speech and child speech. Overall, across child and adult samples, including verbs and prepositions, the κ score is 72.6 for sense, 77.4 for the number of semantic-role-bearing arguments, 91.1 for identical semantic role labels on a given argument, and 93.9 for the span of semantic role labels.

-
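The inter-annotator agreement figures above are κ scores. For two annotators labeling the same items, Cohen's κ compares observed agreement against the agreement expected by chance; a minimal sketch:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' label sequences over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where the annotators match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement: chance of a match given each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[l] * freq_b[l] for l in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

κ is 1.0 for perfect agreement and 0.0 when agreement is no better than chance, which is why it is reported instead of raw percent agreement for annotation tasks like this one.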
A Massively Parallel Corpus: the Bible in 100 Languages
Language Resources & Evaluation. DOI: 10.1007/s10579-014-9287-y. Original paper.

A massively parallel corpus: the Bible in 100 languages
Christos Christodouloupoulos, Mark Steedman
© The Author(s) 2014. This article is published with open access at Springerlink.com.

Abstract: We describe the creation of a massively parallel corpus based on 100 translations of the Bible. We discuss some of the difficulties in acquiring and processing the raw material as well as the potential of the Bible as a corpus for natural language processing. Finally we present a statistical analysis of the corpora collected and a detailed comparison between the English translation and other English corpora.

Keywords: Parallel corpus · Multilingual corpus · Comparative corpus linguistics

1 Introduction

Parallel corpora are a valuable resource for linguistic research and natural language processing (NLP) applications. One of the main uses of the latter kind is as training material for statistical machine translation (SMT), where large amounts of aligned data are standardly used to learn word alignment models between the lexica of two languages (for example, in the Giza++ system of Och and Ney 2003). Another interesting use of parallel corpora in NLP is projected learning of linguistic structure. In this approach, supervised data from a resource-rich language is used to guide the unsupervised learning algorithm in a target language. Although there are some techniques that do not require parallel texts (e.g. Cohen et al. 2011), the most successful models use sentence-aligned corpora (Yarowsky and Ngai 2001; Das and Petrov 2011).

Correspondence: C. Christodouloupoulos, Department of Computer Science, UIUC, 201 N.

-
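One reason the Bible is attractive as a parallel corpus is that its translations share a book/chapter/verse indexing scheme, so verse IDs give sentence-level alignment almost for free. A minimal sketch, assuming verse-keyed dictionaries (an illustrative data layout, not the paper's actual pipeline):

```python
def align_by_verse(bible_a, bible_b):
    """Align two Bible translations on shared (book, chapter, verse) IDs.

    Each input maps a verse ID like ("Gen", 1, 1) to that verse's text.
    Returns (text_a, text_b) pairs for IDs present in both translations,
    in canonical verse order.
    """
    shared = sorted(set(bible_a) & set(bible_b))
    return [(bible_a[v], bible_b[v]) for v in shared]
```

Taking the intersection of verse IDs also handles the practical wrinkle that some translations omit or merge verses, which is one of the processing difficulties the paper discusses.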
The Influence of Text Annotation Tools on Print and Digital Reading Comprehension
The Influence of Text Annotation Tools on Print and Digital Reading Comprehension
Gal Ben-Yehudah (The Open University of Israel, [email protected]), Yoram Eshet-Alkalai (The Open University of Israel, [email protected])

Abstract: Recent studies that compared reading comprehension between print and digital displays found a disadvantage for digital reading. A separate line of research has shown that effective use of learning strategies and active learning tools (e.g. annotation tools: highlighting, underlining and note-taking) can improve reading comprehension. Here, the influence of annotation tools on comprehension and their utility in diminishing the gap between print and digital reading was investigated. In a 2 × 2 experimental design, ninety-three undergraduates (mean age 29.6 years, 72 women) read an academic essay in print or in digital format, with or without the use of annotation tools. In general, the results reinforce previous research reports on the inferiority of digital compared to print reading comprehension. Our findings indicate that for factual-level questions, using annotation tools had no effect on comprehension in either format. For the inference-level questions, however, use of annotation tools improved comprehension of printed text but not of digital text. In other words, text annotation in the digital format had no effect on reading comprehension. In the digital era, digital reading is the norm rather than the exception; thus, it is essential to improve our understanding of the factors that influence the comprehension of digital text.

Keywords: digital reading, print reading, annotation tools, learning strategies, self-regulated learning, monitoring.

-
A Multi-Layer Markup Language for Geospatial Semantic Annotations Ludovic Moncla, Mauro Gaio
A Multi-Layer Markup Language for Geospatial Semantic Annotations
Ludovic Moncla (Universidad de Zaragoza, C/ María de Luna, 1, Zaragoza, Spain, [email protected]), Mauro Gaio (LIUPPA - EA 3000, Av. de l'Université, Pau, France, [email protected])

Citation: Ludovic Moncla, Mauro Gaio. A multi-layer markup language for geospatial semantic annotations. 9th Workshop on Geographic Information Retrieval, Nov 2015, Paris, France. 10.1145/2837689.2837700. HAL Id: hal-01256370, https://hal.archives-ouvertes.fr/hal-01256370, submitted on 1 Jan 2018.

Abstract: In this paper we describe a markup language for semantically annotating raw texts. We define a formal representation of text documents written in natural language that can be applied for the task of Named Entities Recognition and Spatial Role Labeling.

1 Motivation and Background

Two categories of markup languages can be considered in geospatial tasks: those dedicated to the encoding of spatial data (e.g., GML, KML) and those dedicated to the annotation of spatial or spatio-temporal information in texts, such

-
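A multi-layer standoff markup of the kind described can be sketched as one layer of tokens plus a second layer of spatial entities that reference token ids. The element and attribute names below are illustrative only, not the schema proposed in the paper.

```python
import xml.etree.ElementTree as ET

def annotate(tokens, spans):
    """Build a two-layer standoff document: a token layer plus an entity
    layer whose elements reference token ids. spans are (start, end, role)
    token ranges; the XML vocabulary here is hypothetical."""
    doc = ET.Element("document")
    tok_layer = ET.SubElement(doc, "layer", name="tokens")
    for i, tok in enumerate(tokens):
        ET.SubElement(tok_layer, "token", id=f"t{i}").text = tok
    ent_layer = ET.SubElement(doc, "layer", name="spatial-entities")
    for start, end, role in spans:
        ET.SubElement(ent_layer, "entity",
                      tokens=" ".join(f"t{i}" for i in range(start, end)),
                      role=role)
    return ET.tostring(doc, encoding="unicode")
```

Keeping annotations in separate layers that point back at tokens, rather than wrapping the text inline, lets named-entity and spatial-role layers overlap freely, which is the usual motivation for standoff designs.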
Love Is Hard to Understand: The Relationship Between Transitivity and Caused Events in the Acquisition of Emotion Verbs
Love Is Hard to Understand: The Relationship Between Transitivity and Caused Events in the Acquisition of Emotion Verbs
Joshua K. Hartshorne (Massachusetts Institute of Technology; Harvard University), Amanda Pogue (University of Waterloo), Jesse Snedeker (Harvard University)
In press at Journal of Child Language.

Citation: Hartshorne, Joshua K., Amanda Pogue, and Jesse Snedeker. 2014. Love Is Hard to Understand: The Relationship Between Transitivity and Caused Events in the Acquisition of Emotion Verbs. Journal of Child Language (June 19): 1–38. doi:10.1017/S0305000914000178. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:14117738

Acknowledgements: The authors wish to thank Timothy O'Donnell for assistance with the corpus analysis as well as Alfonso Caramazza, Susan Carey, Steve Pinker, Mahesh Srinivasan, Nathan Winkler-Rhoades, Melissa Kline, Hugh Rabagliati, members of the Language and Cognition workshop, and three anonymous reviewers for comments and discussion. This material is based on work supported by a National Defense Science and Engineering Graduate Fellowship to JKH and a grant from the National Science Foundation to Jesse Snedeker (0623845).

-
BOEMIE Ontology-Based Text Annotation Tool
BOEMIE Ontology-Based Text Annotation Tool
Pavlina Fragkou, Georgios Petasis, Aris Theodorakos, Vangelis Karkaletsis, Constantine D. Spyropoulos
Software and Knowledge Engineering Laboratory, Institute of Informatics and Telecommunications, National Center for Scientific Research (N.C.S.R.) "Demokritos", P. Grigoriou and Neapoleos Str., 15310 Aghia Paraskevi Attikis, Greece
E-mail: {fragou, petasis, artheo, vangelis, costass}@iit.demokritos.gr

Abstract: The huge amount of information available on the Web creates the need for effective information extraction systems that are able to produce metadata satisfying users' information needs. The development of such systems, in the majority of cases, depends on the availability of an appropriately annotated corpus in order to learn extraction models. The production of such corpora can be significantly facilitated by annotation tools that are able to annotate, according to a defined ontology, not only named entities but, most importantly, relations between them. This paper describes the BOEMIE ontology-based annotation tool, which is able to locate blocks of text that correspond to specific types of named entities, fill tables corresponding to ontology concepts with those named entities, and link the filled tables based on relations defined in the domain ontology. Additionally, it can perform annotation of blocks of text that refer to the same topic. The tool has a user-friendly interface, supports automatic pre-annotation and annotation comparison, as well as customization to other annotation schemata. The annotation tool has been used in a large-scale annotation task involving 3,000 web pages regarding athletics. It has also been used in another annotation task involving 503 web pages with medical information, in different languages.

-
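Ontology-driven relation annotation, as described above, lets a tool accept only those links between entities that the domain ontology licenses. A minimal sketch, using a hypothetical mini-ontology rather than BOEMIE's actual athletics schema:

```python
# Hypothetical mini-ontology: which relation types may hold between which
# entity types. Illustrative only; not BOEMIE's actual domain ontology.
ONTOLOGY = {
    "hasAge": ("Athlete", "Age"),
    "hasNationality": ("Athlete", "Country"),
}

def valid_relation(rel, subj_type, obj_type, ontology=ONTOLOGY):
    """Accept a relation only if the ontology licenses it for these entity types."""
    return ontology.get(rel) == (subj_type, obj_type)
```

Checking candidate links against the ontology in this way is what keeps a relation-annotated corpus internally consistent with the schema the extraction models will later be trained on.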
PDF Annotation with Labels and Structure
PAWLS: PDF Annotation With Labels and Structure
Mark Neumann, Zejiang Shen, Sam Skjonsberg
Allen Institute for Artificial Intelligence ({markn,shannons,sams}@allenai.org)

Abstract: Adobe's Portable Document Format (PDF) is a popular way of distributing view-only documents with a rich visual markup. This presents a challenge to NLP practitioners who wish to use the information contained within PDF documents for training models or data analysis, because annotating these documents is difficult. In this paper, we present PDF Annotation with Labels and Structure (PAWLS), a new annotation tool designed specifically for the PDF document format. PAWLS is particularly suited for mixed-mode annotation and scenarios in which annotators require extended context to annotate accurately. PAWLS supports span-based textual annotation, N-ary relations and freeform, non-textual bounding boxes, all of which can be exported in convenient formats for training multi-modal machine learning models.

From the paper: the extracted text stream lacks any organization or markup, such as paragraph boundaries, figure placement and page headers/footers. Existing popular annotation tools such as BRAT (Stenetorp et al., 2012) focus on annotation of user-provided plain text in a web browser specifically designed for annotation only. For many labeling tasks, this format is exactly what is required. However, as the scope and ability of natural language processing technology goes beyond purely textual processing due in part to recent advances in large language models (Peters et al., 2018; Devlin et al., 2019, inter alia), the context and media in which datasets are created must evolve as well. In addition, the quality of both data collection and evaluation methodology is highly dependent on the particular annotation/evaluation context in which the data being annotated is viewed (Joseph
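Mixed-mode annotation of the kind PAWLS targets couples text spans with page-level bounding boxes, so a single record can describe either a labeled text region or a purely visual one such as a figure. The sketch below shows one plausible record layout and JSON export; the field names are illustrative, not PAWLS's actual export schema.

```python
import json

def make_annotation(page, label, bbox, text=None, token_ids=None):
    """A mixed-mode annotation record: a labeled bounding box on a PDF page,
    optionally tied to extracted tokens. Hypothetical schema for illustration."""
    return {
        "page": page,
        "label": label,
        "bbox": {"x0": bbox[0], "y0": bbox[1], "x1": bbox[2], "y1": bbox[3]},
        "text": text,            # None for non-textual regions such as figures
        "token_ids": token_ids,  # None when no extracted tokens fall in the box
    }

def export(annotations, path):
    """Dump a list of annotation records to a JSON file for model training."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"annotations": annotations}, f, indent=2)
```

Keeping the bounding box alongside the (possibly absent) text is what makes such an export usable for training multi-modal models that consume both layout and content.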