Natural Language Annotation for Machine Learning

Total Pages: 16

File Type: PDF, Size: 1,020 KB

Natural Language Annotation for Machine Learning
by James Pustejovsky and Amber Stubbs

Copyright © 2013 James Pustejovsky and Amber Stubbs. All rights reserved. Printed in the United States of America. Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472. O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://my.safaribooksonline.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or [email protected].

Editors: Julie Steele and Meghan Blanchette
Production Editor: Kristen Borg
Copyeditor: Audrey Doyle
Proofreader: Linley Dolby
Indexer: WordCo Indexing Services
Cover Designer: Randy Comer
Interior Designer: David Futato
Illustrator: Rebecca Demarest

October 2012: First Edition
Revision History for the First Edition: 2012-10-10 First release
See http://oreilly.com/catalog/errata.csp?isbn=9781449306663 for release details.

Nutshell Handbook, the Nutshell Handbook logo, and the O'Reilly logo are registered trademarks of O'Reilly Media, Inc. Natural Language Annotation for Machine Learning, the image of a cockatiel, and related trade dress are trademarks of O'Reilly Media, Inc. Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and O'Reilly Media, Inc., was aware of a trademark claim, the designations have been printed in caps or initial caps. While every precaution has been taken in the preparation of this book, the publisher and authors assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein.

ISBN: 978-1-449-30666-3 [LSI]

Table of Contents

Preface (ix)

1. The Basics (1): The Importance of Language Annotation (1); The Layers of Linguistic Description (3); What Is Natural Language Processing? (4); A Brief History of Corpus Linguistics (5); What Is a Corpus? (8); Early Use of Corpora (10); Corpora Today (13); Kinds of Annotation (14); Language Data and Machine Learning (21); Classification (22); Clustering (22); Structured Pattern Induction (22); The Annotation Development Cycle (23); Model the Phenomenon (24); Annotate with the Specification (27); Train and Test the Algorithms over the Corpus (29); Evaluate the Results (30); Revise the Model and Algorithms (31); Summary (31)

2. Defining Your Goal and Dataset (33): Defining Your Goal (33); The Statement of Purpose (34); Refining Your Goal: Informativity Versus Correctness (35); Background Research (41); Language Resources (41); Organizations and Conferences (42); NLP Challenges (43); Assembling Your Dataset (43); The Ideal Corpus: Representative and Balanced (45); Collecting Data from the Internet (46); Eliciting Data from People (46); The Size of Your Corpus (48); Existing Corpora (48); Distributions Within Corpora (49); Summary (51)

3. Corpus Analytics (53): Basic Probability for Corpus Analytics (54); Joint Probability Distributions (55); Bayes Rule (57); Counting Occurrences (58); Zipf's Law (61); N-grams (61); Language Models (63); Summary (65)

4. Building Your Model and Specification (67): Some Example Models and Specs (68); Film Genre Classification (70); Adding Named Entities (71); Semantic Roles (72); Adopting (or Not Adopting) Existing Models (75); Creating Your Own Model and Specification: Generality Versus Specificity (76); Using Existing Models and Specifications (78); Using Models Without Specifications (79); Different Kinds of Standards (80); ISO Standards (80); Community-Driven Standards (83); Other Standards Affecting Annotation (83); Summary (84)

5. Applying and Adopting Annotation Standards (87): Metadata Annotation: Document Classification (88); Unique Labels: Movie Reviews (88); Multiple Labels: Film Genres (90); Text Extent Annotation: Named Entities (94); Inline Annotation (94); Stand-off Annotation by Tokens (96); Stand-off Annotation by Character Location (99); Linked Extent Annotation: Semantic Roles (101); ISO Standards and You (102); Summary (103)

6. Annotation and Adjudication (105): The Infrastructure of an Annotation Project (105); Specification Versus Guidelines (108); Be Prepared to Revise (109); Preparing Your Data for Annotation (110); Metadata (110); Preprocessed Data (110); Splitting Up the Files for Annotation (111); Writing the Annotation Guidelines (112); Example 1: Single Labels—Movie Reviews (113); Example 2: Multiple Labels—Film Genres (115); Example 3: Extent Annotations—Named Entities (119); Example 4: Link Tags—Semantic Roles (120); Annotators (122); Choosing an Annotation Environment (124); Evaluating the Annotations (126); Cohen's Kappa (κ) (127); Fleiss's Kappa (κ) (128); Interpreting Kappa Coefficients (131); Calculating κ in Other Contexts (132); Creating the Gold Standard (Adjudication) (134); Summary (135)

7. Training: Machine Learning (139): What Is Learning? (140); Defining Our Learning Task (142); Classifier Algorithms (144); Decision Tree Learning (145); Gender Identification (147); Naïve Bayes Learning (151); Maximum Entropy Classifiers (157); Other Classifiers to Know About (158); Sequence Induction Algorithms (160); Clustering and Unsupervised Learning (162); Semi-Supervised Learning (163); Matching Annotation to Algorithms (165); Summary (166)

8. Testing and Evaluation (169): Testing Your Algorithm (170); Evaluating Your Algorithm (170); Confusion Matrices (171); Calculating Evaluation Scores (172); Interpreting Evaluation Scores (177); Problems That Can Affect Evaluation (178); Dataset Is Too Small (178); Algorithm Fits the Development Data Too Well (180); Too Much Information in the Annotation (181); Final Testing Scores (181); Summary (182)

9. Revising and Reporting (185): Revising Your Project (186); Corpus Distributions and Content (186); Model and Specification (187); Annotation (188); Training and Testing (189); Reporting About Your Work (189); About Your Corpus (191); About Your Model and Specifications (192); About Your Annotation Task and Annotators (192); About Your ML Algorithm (193); About Your Revisions (194); Summary (194)

10. Annotation: TimeML (197): The Goal of TimeML (198); Related Research (199); Building the Corpus (201); Model: Preliminary Specifications (201); Times (202); Signals (202); Events (203); Links (203); Annotation: First Attempts (204); Model: The TimeML Specification Used in TimeBank (204); Time Expressions (204); Events (205); Signals (206); Links (207); Confidence (208); Annotation: The Creation of TimeBank (209); TimeML Becomes ISO-TimeML (211); Modeling the Future: Directions for TimeML (213); Narrative Containers (213); Expanding TimeML to Other Domains (215); Event Structures (216); Summary (217)

11. Automatic Annotation: Generating TimeML (219): The TARSQI Components (220); GUTime: Temporal Marker Identification (221); EVITA: Event Recognition and Classification (222); GUTenLINK (223); Slinket (224); SputLink (225); Machine Learning in the TARSQI Components (226); Improvements to the TTK (226); Structural Changes (227); Improvements to Temporal Entity Recognition: BTime (227); Temporal Relation Identification (228); Temporal Relation Validation (229); Temporal Relation Visualization (229); TimeML Challenges: TempEval-2 (230); TempEval-2: System Summaries (231); Overview of Results (234); Future of the TTK (234); New Input Formats (234); Narrative Containers/Narrative Times (235); Medical Documents (236); Cross-Document Analysis (237); Summary (238)

12. Afterword: The Future of Annotation (239): Crowdsourcing Annotation (239); Amazon's Mechanical Turk (240); Games with a Purpose (GWAP) (241); User-Generated Content (242); Handling Big Data (243); Boosting (243); Active Learning (244); Semi-Supervised Learning (245); NLP Online and in the Cloud (246); Distributed Computing (246); Shared Language Resources (247); Shared Language Applications (247); And Finally... (248)

A. List of Available Corpora and Specifications (249)
B. List of Software Resources (271)
C. MAE User Guide (291)
D. MAI User Guide (299)
E. Bibliography (305)
Index (317)

Preface

This book is intended as a resource for people who are interested in using computers to help process natural language. A natural language refers to any language spoken by humans, either currently (e.g., English, Chinese, Spanish) or in the past (e.g., Latin, ancient Greek, Sanskrit). Annotation refers to the process of adding metadata information to the text in order to augment a computer's capability to perform Natural Language Processing (NLP). In particular, we examine how information can be added to natural language text through annotation in order to increase the performance of machine learning algorithms: computer programs designed to extrapolate rules from the information provided over texts in order to apply those rules to unannotated texts later on.

Natural Language Annotation for Machine Learning

This book details the multistage process for building your own annotated natural language dataset (known as a corpus) in order to train machine learning (ML) algorithms for language-based data and knowledge discovery. The overall goal of this book is to show readers how to create their own corpus, starting with selecting an annotation task, creating the annotation specification, designing the guidelines, creating a "gold standard" corpus, and then beginning the actual data creation with the annotation process. Because the annotation process is not linear, multiple iterations can be required for defining the tasks, annotations, and evaluations, in order to achieve the best results for a particular goal. The process can be summed up in terms of the MATTER Annotation Development Process: Model, Annotate, Train, Test, Evaluate, Revise. This book guides the reader through the cycle, and provides detailed examples and discussion.
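As a small illustration of what "adding metadata information to the text" can look like in practice, here is a minimal Python sketch of inline annotation. The tag names and the tiny span list are illustrative assumptions, not a specification taken from the book.

```python
# Minimal sketch of inline annotation: wrapping known spans of raw text in
# XML-style tags. The PERSON/ORGANIZATION labels and the span list below are
# illustrative only, not a scheme defined in the book.
SPANS = [
    ("James Pustejovsky", "PERSON"),
    ("O'Reilly Media", "ORGANIZATION"),
]

def annotate_inline(text, spans):
    """Return the text with each known span wrapped in an inline tag."""
    for surface, label in spans:
        text = text.replace(surface, f"<{label}>{surface}</{label}>")
    return text

print(annotate_inline("James Pustejovsky writes for O'Reilly Media.", SPANS))
# -> <PERSON>James Pustejovsky</PERSON> writes for <ORGANIZATION>O'Reilly Media</ORGANIZATION>.
```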
Recommended publications
  • Adapting to Trends in Language Resource Development: A Progress Report on LDC Activities
    Adapting to Trends in Language Resource Development: A Progress Report on LDC Activities
    Christopher Cieri, Mark Liberman (University of Pennsylvania, Linguistic Data Consortium, 3600 Market Street, Suite 810, Philadelphia, PA 19104, USA; e-mail: {ccieri,myl} AT ldc.upenn.edu)
    Abstract: This paper describes changing needs among the communities that exploit language resources and recent LDC activities and publications that support those needs by providing greater volumes of data and associated resources in a growing inventory of languages with ever more sophisticated annotation. Specifically, it covers the evolving role of data centers with specific emphasis on the LDC, the publications released by the LDC in the two years since our last report, and the sponsored research programs that provide LRs initially to participants in those programs but eventually to the larger HLT research communities and beyond.
    1. Introduction: The language resource landscape over the past two years may be characterized by what has changed and what has remained constant. Constant are the growing demands for an ever-increasing body of language resources in a growing number of human languages with increasingly sophisticated annotation to satisfy an ever-expanding list of user communities. Changing are the relative ... creators of tools, specifications and best practices as well as databases. At least in the case of LDC, the role of the data center has grown organically, generally in response to stated need but occasionally in anticipation of it. LDC's role was initially that of a specialized publisher of LRs, with the additional responsibility to archive and protect published resources to guarantee their availability over the long term.
  • EN 300 468 V1.3.1 (1997-09) European Standard (Telecommunications Series)
    Draft EN 300 468 V1.3.1 (1997-09), European Standard (Telecommunications series)
    Digital Video Broadcasting (DVB); Specification for Service Information (SI) in DVB systems
    European Broadcasting Union / Union Européenne de Radio-Télévision (EBU/UER); European Telecommunications Standards Institute (ETSI)
    Reference: REN/JTC-00DVB-43 (4c000j0o.PDF)
    Keywords: DVB, broadcasting, digital, video, MPEG, TV
    ETSI Secretariat. Postal address: F-06921 Sophia Antipolis Cedex, FRANCE. Office address: 650 Route des Lucioles, Sophia Antipolis, Valbonne, FRANCE. Tel.: +33 4 92 94 42 00; Fax: +33 4 93 65 47 16. Siret N° 348 623 562 00017, NAF 742 C. Non-profit association registered with the Sous-Préfecture de Grasse (06), N° 7803/88. X.400: c=fr; a=atlas; p=etsi; s=secretariat. Internet: http://www.etsi.fr
    Copyright Notification: No part may be reproduced except as authorized by written permission. The copyright and the foregoing restriction extend to reproduction in all media. © European Telecommunications Standards Institute 1997. © European Broadcasting Union 1997. All rights reserved.
    Contents (excerpt): Intellectual Property Rights (5); Foreword (5); 1 Scope (6)
  • 9.1.1 Lesson 5
    NYS Common Core ELA & Literacy Curriculum, Grade 9 • Module 1 • Unit 1 • Lesson 5 (Draft)
    9.1.1 Lesson 5
    Introduction: In this lesson, students continue to practice reading closely through annotating text. Students will begin learning how to annotate text using four annotation codes and marking their thinking on the text. Students will reread the Stage 1 epigraph and narrative of "St. Lucy's Home for Girls Raised by Wolves," to "Neither did they" (pp. 225–227). The text annotation will serve as a scaffold for teacher-led text-dependent questions. Students will participate in a discussion about the reasons for text annotation and its connection to close reading. After an introduction and modeling of annotation, students will practice annotating the text with a partner. As student partners practice annotation, they will discuss the codes and notes they write on the text. The annotation codes introduced in this lesson will be used throughout Unit 1. For the lesson assessment, students will use their annotation as a tool to find evidence to independently answer, in writing, one text-dependent question. For homework, students will continue to read their Accountable Independent Reading (AIR) texts, this time using a focus standard to guide their reading.
    Assessed Standard(s): RL.9-10.1: Cite strong and thorough textual evidence to support analysis of what the text says explicitly as well as inferences drawn from the text.
    Addressed Standard(s): RL.9-10.3: Analyze how complex characters (e.g., those with multiple or conflicting motivations) develop over the course of a text, interact with other characters, advance the plot or develop the theme.
  • SLATE: A Super-Lightweight Annotation Tool for Experts
    SLATE: A Super-Lightweight Annotation Tool for Experts
    Jonathan K. Kummerfeld (Computer Science & Engineering, University of Michigan, Ann Arbor, MI, USA)
    Abstract: Many annotation tools have been developed, covering a wide variety of tasks and providing features like user management, preprocessing, and automatic labeling. However, all of these tools use Graphical User Interfaces, and often require substantial effort to install and configure. This paper presents a new annotation tool that is designed to fill the niche of a lightweight interface for users with a terminal-based workflow. SLATE supports annotation at different scales (spans of characters, tokens, and lines, or a document) and of different types (free text, labels, and links), with easily customisable keybindings, and unicode support. In a user study comparing with other tools it was consistently the easiest to in- ... we minimise the time cost of installation by implementing the entire system in Python using built-in libraries. Third, we follow the Unix Tools Philosophy (Raymond, 2003) to write programs that do one thing well, with flat text formats. In our case, (1) the tool only does annotation, not tokenisation, automatic labeling, file management, etc., which are covered by other tools, and (2) data is stored in a format that both people and command line tools like grep can easily read. SLATE supports annotation of items that are continuous spans of either characters, tokens, lines, or documents. For all item types there are three forms of annotation: labeling items with categories, writing free-text labels, or linking pairs of items.
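Since the abstract emphasizes flat, grep-friendly text formats, here is a minimal Python sketch of reading a line-oriented standoff annotation file of that general kind. The column layout (filename, start token, end token, label) is a hypothetical illustration, not SLATE's actual on-disk format.

```python
from collections import defaultdict

def read_annotations(path):
    """Group (start, end, label) token-span annotations by source file.

    Assumes a hypothetical whitespace-separated layout per line:
        <filename> <start_token> <end_token> <label>
    """
    spans = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines
            filename, start, end, label = line.split()
            spans[filename].append((int(start), int(end), label))
    return spans

# Because the file is plain text, ordinary shell tools work on it too, e.g.:
#   grep PERSON annotations.txt
```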
  • Standardisation Action Plan for Clarin
    Standardisation Action Plan for Clarin
    State: Proposal to CLARIN community
    Nuria Bel, Jonas Beskow, Lou Boves, Gerhard Budin, Nicoletta Calzolari, Khalid Choukri, Erhard Hinrichs, Steven Krauwer, Lothar Lemnitzer, Stelios Piperidis, Adam Przepiorkowski, Laurent Romary, Florian Schiel, Helmut Schmidt, Hans Uszkoreit, Peter Wittenburg. August 2009
    Summary: This document describes a proposal for a Standardisation Action Plan (SAP) for the Clarin initiative, in close synchronization with other relevant initiatives such as Flarenet, ELRA, ISO and TEI. While Flarenet is oriented towards a broader scope, since it is also addressing standards that are typically used in industry, CLARIN wants to be more focussed in its statements to the research domain. Due to the overlap it is agreed that the Flarenet and CLARIN documents on standards need to be closely synchronized. This note covers standards that are generic (XML, UNICODE) as well as standards that are domain specific, where naturally the LRT community has much more influence. This Standardization Action Plan wants to give an orientation for all practical work in CLARIN to achieve a harmonized domain of language resources and technology stepwise, and therefore its core message is to overcome fragmentation. To meet these goals it wants to keep its message as simple as possible. A web-site will be established that will contain more information about examples, guidelines, explanations, tools, converters and training events such as summer schools.
    The organization of the document is as follows:
    • Chapter 1: Introduction to the topic
    • Chapter 2: Recommended standards that CLARIN should endorse (page 4)
    • Chapter 3: Standards that are emerging and relevant for CLARIN (page 8)
    • Chapter 4: General guidelines that need to be followed (page 12)
    • Chapter 5: Reference to community practices (page 14)
    • Chapter 6: References
    This document tries to be short and will give comments, recommendations and discuss open issues for each of the standards.
  • Standards for Language Coding: the ISO 639 Family
    Standards for language coding: the ISO 639 family
    Rebecca Guenther, Library of Congress. Jan. 8, 2010 (LSA Annual Meeting)
    ISO Standards development:
    • ISO consists of Technical Committees (TC) with subcommittees (SC)
    • ISO language coding standards are maintained by TC 37/SC2 (Terminology and other language and content resources) and TC 46/SC4 (Information and documentation)
    ISO 639 standards:
    • ISO 639-1: 2-character codes (136 codes)
    • ISO 639-2: 3-character codes (450+)
    • ISO 639-3: 3-character codes (7700+)
    • ISO 639-4: principles
    • ISO 639-5: 3-character codes (114)
    • ISO 639-6: 4-character codes (??)
    ISO 639 Joint Advisory Committee:
    • Established to advise the RAs for ISO 639-1 and ISO 639-2
    • Rotating chairs: Infoterm (for TC37) and Library of Congress (for TC46)
    • Committee consists of 3 members of each TC, representatives of each registration authority and up to 6 observers
    • Coordinates development of different parts of ISO 639
    ISO 639 language coding principles:
    • Language codes are not changed, for stability of the standard
    • If a language code is retired it is not reassigned to something else
    • Programming languages are not in scope
    • Only deals with languages; codes from other ISO standards may be added as needed for more granularity, e.g. country codes, script codes
    ISO 639-1:
    • First published 1967
    • Covers major languages of the world
    • Alpha-2 codes; only 676 possible combinations
    • Developed for use in terminology applications
    • Consists of
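Since the slides list the two- and three-letter parts of ISO 639, here is a small Python sketch mapping a few well-known ISO 639-1 codes to their ISO 639-2/T equivalents. The table below is a tiny illustrative subset, not the full code list maintained by the registration authorities.

```python
# A few well-known ISO 639-1 (alpha-2) to ISO 639-2/T (alpha-3) pairs.
# Note: some languages also have a separate ISO 639-2/B bibliographic code
# (e.g. German: deu for 639-2/T vs. ger for 639-2/B).
ISO_639_1_TO_2T = {
    "en": "eng",
    "de": "deu",
    "fr": "fra",
    "zh": "zho",
    "sa": "san",  # Sanskrit
}

def to_alpha3(alpha2):
    """Return the ISO 639-2/T code for an ISO 639-1 code, if known."""
    return ISO_639_1_TO_2T.get(alpha2.lower())

print(to_alpha3("EN"))  # -> eng
```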
  • Technical Study Desktop Internationalization
    X/Open Technical Study: Desktop Internationalisation
    X/Open Company Ltd., December 1995. © X/Open Company Limited. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owners.
    X/Open Document Number: E501. Published by X/Open Company Ltd., U.K. Any comments relating to the material contained in this document may be submitted to X/Open at: X/Open Company Limited, Apex Plaza, Forbury Road, Reading, Berkshire, RG1 1AX, United Kingdom, or by Electronic Mail to: [email protected]
    Contents (excerpt): Chapter 1 Internationalisation (1); 1.1 Introduction (1); 1.2 Character Sets and Encodings (2); 1.3 The C Programming Language (5); 1.4 Internationalisation Support in POSIX (6); 1.5 Internationalisation Support in the X/Open CAE (7); 1.5.1 XPG4 Facilities
  • Characteristics of Text-To-Speech and Other Corpora
    Characteristics of Text-to-Speech and Other Corpora
    Erica Cooper, Emily Li, Julia Hirschberg (Columbia University, USA)
    Abstract: Extensive TTS corpora exist for commercial systems created for high-resource languages such as Mandarin, English, and Japanese. Speakers recorded for these corpora are typically instructed to maintain constant f0, energy, and speaking rate and are recorded in ideal acoustic environments, producing clean, consistent audio. We have been developing TTS systems from "found" data collected for other purposes (e.g. training ASR systems) or available on the web (e.g. news broadcasts, audiobooks) to produce TTS systems for low-resource languages (LRLs) which do not currently have expensive, commercial systems. This study investigates whether traditional TTS speakers do exhibit significantly less variation and better speaking characteristics than speakers in found genres. By examining characteristics of f0, energy, speaking rate, articulation, NHR, jitter, and shimmer in found genres and comparing these to tra- ... "standard" TTS speaking style, whether other forms of professional and non-professional speech differ substantially from the TTS style, and which features are most salient in differentiating the speech genres.
    2. Related Work: TTS speakers are typically instructed to speak as consistently as possible, without varying their voice quality, speaking style, pitch, volume, or tempo significantly [1]. This is different from other forms of professional speech in that even with the relatively neutral content of broadcast news, anchors will still have some variance in their speech. Audiobooks present an even greater challenge, with a more expressive reading style and different character voices. Nevertheless, [2, 3, 4] have had success in building voices from audiobook data by identifying and using the most neutral and highest-quality utterances.
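The comparison described above turns on how variable a speaker's pitch and energy are across a recording. As a rough, hypothetical illustration (not the authors' actual feature pipeline), here is a Python sketch using the librosa library to compute simple variability measures; jitter, shimmer, and NHR would need a dedicated voice-analysis tool and are omitted.

```python
import numpy as np
import librosa

def variability(path):
    """Return coefficient-of-variation style measures of pitch and energy.

    Higher values suggest a less 'flat' delivery; a trained TTS speaker
    would be expected to score lower than, say, an audiobook narrator.
    """
    y, sr = librosa.load(path, sr=None)
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    rms = librosa.feature.rms(y=y)[0]
    return {
        "f0_cv": float(np.nanstd(f0) / np.nanmean(f0)),
        "energy_cv": float(np.std(rms) / np.mean(rms)),
    }

# Usage (file names are placeholders):
#   variability("news_anchor.wav") vs. variability("audiobook_chapter.wav")
```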
  • 100,000 Podcasts: A Spoken English Document Corpus
    100,000 Podcasts: A Spoken English Document Corpus
    Ann Clifton (Spotify), Sravana Reddy (Spotify), Yongze Yu (Spotify), Aasish Pappu (Spotify), Rezvaneh Rezapour∗ (University of Illinois at Urbana-Champaign), Hamed Bonab∗ (University of Massachusetts Amherst), Maria Eskevich (CLARIN ERIC), Gareth J. F. Jones (Dublin City University), Jussi Karlgren (Spotify), Ben Carterette (Spotify), Rosie Jones (Spotify)
    Abstract: Podcasts are a large and growing repository of spoken audio. As an audio format, podcasts are more varied in style and production type than broadcast news, contain more genres than typically studied in video data, and are more varied in style and format than previous corpora of conversations. When transcribed with automatic speech recognition they represent a noisy but fascinating collection of documents which can be studied through the lens of natural language processing, information retrieval, and linguistics. Paired with the audio files, they are also a resource for speech processing and the study of paralinguistic, sociolinguistic, and acoustic aspects of the domain. We introduce the Spotify Podcast Dataset, a new corpus of 100,000 podcasts. We demonstrate the complexity of the domain with a case study of two tasks: (1) passage search and (2) summarization. This is orders of magnitude larger than previous speech corpora used for search and summarization. Our results show that the size and variability of this corpus opens up new avenues for research.
  • Syntactic Annotation of Spoken Utterances: a Case Study on the Czech Academic Corpus
    Syntactic annotation of spoken utterances: A case study on the Czech Academic Corpus
    Barbora Hladká and Zdeňka Urešová (Charles University in Prague, Institute of Formal and Applied Linguistics) {hladka, uresova}@ufal.mff.cuni.cz
    Abstract: Corpus annotation plays an important role in linguistic analysis and computational processing of both written and spoken language. Syntactic annotation of spoken texts becomes clearly a topic of considerable interest nowadays, driven by the desire to improve automatic speech recognition systems by incorporating syntax in the language models, or to build language understanding applications. Syntactic annotation of both written and spoken texts in the Czech Academic Corpus was created thirty years ago when no other (even annotated) corpus of spoken texts has existed. We will discuss how much relevant and inspiring this annotation is to the current frameworks of spoken text annotation.
    ... text corpora, e.g. the Penn Treebank, the family of the Prague Dependency Treebanks, the Tiger corpus for German, etc. Some corpora contain a semantic annotation, such as the Penn Treebank enriched by PropBank and Nombank, the Prague Dependency Treebank in its highest layer, the Penn Chinese or the Korean Treebanks. The Penn Discourse Treebank contains discourse annotation. It is desirable that syntactic (and higher) annotation of spoken texts respects the written-text style as much as possible, for obvious reasons: data "compatibility", reuse of tools etc. A number of questions arise immediately: How much experience and knowledge acquired during the written text annotation can we apply to the spoken texts? Are the annotation instructions applicable to transcriptions in a straightforward way or some modifications of them must be done? Can transcriptions be annotated "as they are" or some transformation ...
  • API Dict
    API Dict
    Get the list of available dictionaries
    Endpoint: https://api.pons.com/v1/dictionaries
    Building the request:
    • We are expecting GET requests.
    • All other parameters have to be appended to the endpoint URL as request parameters.
    • Parameter "language" (request parameter): the language of the output (ISO 639-1, two-letter codes). Supported languages are de, el, en, es, fr, it, pl, pt, ru, sl, tr, zh.
    Example using wget: wget -O - --no-check-certificate "https://api.pons.com/v1/dictionaries?language=es"
    Response:
    • The response is sent in JSON format (see below).
    • If an unsupported language was supplied, the response for the default language (English) will be delivered.
    Response content: a list of available dictionaries, where
    • key is the internal name of our dictionary. For two-language dictionaries, it should consist of the two languages ordered alphabetically.
    • simple_label is built this way: "[translated language1] «» [translated language2]"
    • directed_label should be used if there is a direction involved (for example when displaying search results). The direction is implied in the key (i.e. plde means pl » de). This applies only to some languages (see example; you could not use simple_label here).
    • languages is a list containing the languages of the dictionary. Please note that some dictionaries may have only one language (at the time of writing: dede, dedx).
    Example:
    [
      { "key": "depl",
        "simple_label": "niemiecki «» polski",
        "directed_label": { "depl": "niemiecki » polski", "plde": "polski » niemiecki" },
        "languages": [ "de", "pl" ]
      },
      [...]
    ]
    Query dictionary
    Endpoint: https://api.pons.com/v1/dictionary
    Building the request:
    • We are expecting GET requests.
    • The request has to contain the credential (secret) in an HTTP header.
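As a quick illustration, here is a hedged Python sketch (using the requests library, assumed to be installed) that calls the dictionary-list endpoint documented above; no credential is needed for this particular call, and the "key" and "simple_label" fields are the ones described in the response-content list.

```python
import requests

# Query the documented dictionary-list endpoint; `language` controls the
# language of the labels in the response (here: English).
resp = requests.get(
    "https://api.pons.com/v1/dictionaries",
    params={"language": "en"},
    timeout=10,
)
resp.raise_for_status()

# Each entry describes one dictionary, e.g. key "depl" with its label.
for entry in resp.json():
    print(entry["key"], entry["simple_label"])
```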
  • Named Entity Recognition in Question Answering of Speech Data
    Proceedings of the Australasian Language Technology Workshop 2007, pages 57-65
    Named Entity Recognition in Question Answering of Speech Data
    Diego Mollá, Menno van Zaanen, Steve Cassidy (Division of ICS, Department of Computing, Macquarie University, North Ryde, Australia) {diego,menno,cassidy}@ics.mq.edu.au
    Abstract: Question answering on speech transcripts (QAst) is a pilot track of the CLEF competition. In this paper we present our contribution to QAst, which is centred on a study of Named Entity (NE) recognition on speech transcripts, and how it impacts on the accuracy of the final question answering system. We have ported AFNER, the NE recogniser of the AnswerFinder question-answering project, to the set of answer types expected in the QAst track. AFNER uses a combination of regular expressions, lists of names (gazetteers) and machine learning to find NEs in the data. The machine learning component was trained on a develop- ... wordings of the same information). The system uses symbolic algorithms to find exact answers to questions in large document collections. The design and implementation of the AnswerFinder system has been driven by requirements that the system should be easy to configure, extend, and, therefore, port to new domains. To measure the success of the implementation of AnswerFinder in these respects, we decided to participate in the CLEF 2007 pilot task of question answering on speech transcripts (QAst). The task in this competition is different from that for which AnswerFinder was originally designed and provides a good test of portability to new domains. The current CLEF pilot track QAst presents an interesting and challenging new application of question answering.
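Since the abstract says AFNER combines regular expressions, gazetteers, and machine learning, here is a minimal Python sketch of the first two ingredients only; the entity labels, patterns, and gazetteer entries are illustrative assumptions, not AFNER's actual rules, and the machine-learning component is omitted.

```python
import re

# Hypothetical gazetteer entries (surface form -> label), for illustration.
GAZETTEER = {
    "Macquarie University": "ORGANIZATION",
    "Australia": "LOCATION",
}

# Simple surface patterns of the kind a rule-based recogniser might use.
PATTERNS = [
    (re.compile(r"\b(19|20)\d{2}\b"), "DATE"),                # four-digit years
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "PERSON"),   # naive capitalised bigram
]

def tag_entities(text):
    """Return (surface, label, start) candidates from gazetteer and regex matches."""
    found = []
    for surface, label in GAZETTEER.items():
        for m in re.finditer(re.escape(surface), text):
            found.append((m.group(), label, m.start()))
    for pattern, label in PATTERNS:
        for m in pattern.finditer(text):
            found.append((m.group(), label, m.start()))
    return sorted(found, key=lambda t: t[2])

print(tag_entities("Steve Cassidy joined Macquarie University in Australia in 2007."))
```

In a full system, a learned classifier could then score and disambiguate overlapping candidates like these before they are passed to the question answering pipeline.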