QASR: QCRI Aljazeera Speech Resource--A Large Scale Annotated Arabic Speech Corpus


QASR: QCRI Aljazeera Speech Resource -- A Large Scale Annotated Arabic Speech Corpus

Hamdy Mubarak,¹ Amir Hussein,² Shammur Absar Chowdhury,¹ and Ahmed Ali¹
¹ Qatar Computing Research Institute, HBKU, Doha, Qatar
² Kanari AI, California, USA
[email protected], https://arabicspeech.org/

arXiv:2106.13000v1 [cs.CL] 24 Jun 2021

Abstract

We introduce the largest transcribed Arabic speech corpus, QASR¹, collected from the broadcast domain. This multi-dialect speech dataset contains 2,000 hours of speech sampled at 16kHz, crawled from the Aljazeera news channel. The dataset is released with lightly supervised transcriptions aligned with the audio segments. Unlike previous datasets, QASR contains linguistically motivated segmentation, punctuation, and speaker information, among others. QASR is suitable for training and evaluating speech recognition systems, acoustics- and/or linguistics-based Arabic dialect identification, punctuation restoration, speaker identification, speaker linking, and potentially other NLP modules for spoken data. In addition to the QASR transcription, we release a dataset of 130M words to aid in designing and training a better language model. We show that end-to-end automatic speech recognition trained on QASR reports a competitive word error rate compared to the previous MGB-2 corpus. We report baseline results for downstream natural language processing tasks such as named entity recognition using the speech transcript, and we report the first baseline for Arabic punctuation restoration. We make the corpus available for the research community.

1 Introduction

Research on Automatic Speech Recognition (ASR) has attracted a lot of attention in recent years (Chiu et al., 2018; Watanabe et al., 2018). Such success has brought remarkable improvements in reaching human-level performance (Xiong et al., 2016; Saon et al., 2017; Hussein et al., 2021). This has been achieved by the development of large spoken corpora with supervised (Panayotov et al., 2015; Ardila et al., 2019), semi-supervised (Bell et al., 2015; Ali et al., 2016), and, more recently, unsupervised (Valk and Alumäe, 2020; Wang et al., 2021) transcription. This work makes it possible either to reduce the Word Error Rate (WER) considerably or to extract metadata from speech: dialect identification (Shon et al., 2020), speaker identification (Shon et al., 2019), and code-switching (Chowdhury et al., 2020b, 2021).

Natural Language Processing (NLP), on the other hand, values large amounts of textual information for designing experiments. NLP research for Arabic has achieved milestones in the last few years in morphological disambiguation, Named Entity Recognition (NER), and diacritization (Pasha et al., 2014; Abdelali et al., 2016; Mubarak et al., 2019). The NLP stack for Modern Standard Arabic (MSA) has reached very high performance in many tasks. With the rise of Dialectal Arabic (DA) content online, more resources and models have been built to study DA textual dialect identification (Abdul-Mageed et al., 2020; Samih et al., 2017).

Our objective is to release the first Arabic speech and NLP corpus for studying spoken MSA and DA, enabling empirical evaluation of learning more than the word sequence from speech. In our view, existing speech and NLP corpora are missing the link between the two modalities. Speech poses unique challenges such as disfluency (Pravin and Palanivelan, 2021), overlapping speech (Tripathi et al., 2020; Chowdhury et al., 2019), hesitation (Wottawa et al., 2020; Chowdhury et al., 2017), and code-switching (Du et al., 2021; Chowdhury et al., 2021). These challenges are often overlooked in NLP tasks, since they are not present in typical text data.

In this paper, we create and release² the largest corpus of transcribed Arabic speech. It comprises 2,000 hours of speech data with lightly supervised transcriptions. Our contributions are: (i) aligning the transcription, including punctuation, with the corresponding audio segments for building ASR systems; (ii) providing semi-supervised speaker identification and speaker linking per audio segment; (iii) releasing baseline results for acoustic and linguistic Arabic dialect identification and for punctuation restoration; (iv) adding a new layer of annotation to the publicly available MGB-2 test set for evaluating NER on speech transcripts; (v) sharing code-switching data between Arabic and foreign languages for speech and text; and finally (vi) releasing more than 130M words for Language Modeling (LM).

We believe that providing the research community with access to multi-dialectal speech data along with the corresponding NLP features will foster open research in several areas, such as the joint analysis of speech and NLP processing. Here, we build models and share baseline results for all of the aforementioned tasks.

Table 1: Comparison between MGB-2 and QASR.

                 MGB-2                        QASR
Hours            1,200                        2,000
Dialects         MSA, GLF, LEV, NOR, EGY (both)
Segmentation     Influenced by silence        Linguistically and
                 and segment length           acoustically motivated
Transcription    Lightly supervised           Lightly supervised
Punctuation      –                            ✓
Code-Switching   Possible                     ✓
Turn-Ending      –                            ✓
Speaker Names    ✓ (2,000 speakers)           ✓ (+ normalised names)
Speaker Gender   –                            covers ≈82% of data
Speaker Country  –                            manually annotated in test set
NER              –                            manually annotated in test set

¹ QASR (قصر) in Arabic means "Palace". The acronym stands for: QCRI Aljazeera Speech Resource.
² Data can be obtained from: https://arabicspeech.org/qasr
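The Word Error Rate used above to compare systems trained on QASR and MGB-2 is the standard word-level edit distance normalized by reference length. A minimal sketch (for illustration only; the paper's experiments would use a standard scoring toolkit):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution and one deletion against a 4-word reference:
print(wer("a b c d", "a x c"))  # → 0.5
```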
1.1 Related work

The CallHome task within the NIST benchmark evaluations framework (Pallett, 2003) released one of the first transcribed Arabic dialect datasets. Over the years, the NIST evaluations provided more dialectal data, mainly in the Egyptian and Levantine dialects, as part of the language recognition evaluation campaigns. Programs such as GALE and TRANSTAC (Olive et al., 2011) released more than 251 hours of Arabic data, including the first spoken Iraqi dialect, among others. These datasets exposed the research community to the challenges of spoken dialectal Arabic and motivated the design of competitions for dialect identification, dialectal ASR, and related tasks (see Ali et al. (2021) for details).

The following datasets were released from the Multi-Genre Broadcast (MGB) challenge: (i) MGB-2 (Ali et al., 2016), the first milestone towards large-scale continuous speech recognition for the Arabic language; the corpus contains a total of 1,200 hours of speech with lightly supervised transcriptions, collected from the Aljazeera Arabic news channel over many years. (ii) MGB-3 (Ali et al., 2017), focused only on Egyptian Arabic broadcast data, comprising 16 hours. (iii) MGB-5 (Ali et al., 2019), consisting of 13 hours of Moroccan Arabic speech data. In addition, the Arabic dataset from the CommonVoice³ project provides 49 hours of modern standard Arabic (MSA) speech data.⁴

Unlike MGB-2, the QASR dataset is the largest multi-dialectal corpus with linguistically motivated segmentation. The dataset includes multi-layer information that aids both the speech and NLP research communities. QASR is the first speech corpus to provide resources for benchmarking NER and punctuation restoration systems. For a close comparison between MGB-2 and QASR, see Table 1.

2 Corpus Creation

2.1 Data Collection

We obtained the Aljazeera Arabic news channel's archive (henceforth AJ), spanning 11 years from 2004 until 2015. It contains more than 4,000 episodes from 19 different programs. These programs cover different domains such as politics, society, economy, sports, science, etc. For each episode, we have the following: (i) audio sampled at 16kHz; (ii) a manual transcription; the textual transcriptions contained no timing information, and their quality varied significantly, with conversational programs, in which overlapping speech and dialectal usage were more frequent, being the most challenging; and finally (iii) some metadata. The metadata is neither complete nor consistent: a speaker name can appear in many variants (e.g. Barack Obama/President of USA, Barck Obama (typo), etc.), the lists of guest speakers and episode topics are not comprehensive, and there are many spelling mistakes in the majority of metadata field names and attributes. To overcome these challenges, we applied several iterations of automatic parsing and extraction followed by manual verification and standardization.

For better evaluation of the QASR corpus, we reused the publicly available MGB-2 (Ali et al., 2016) test set: it has been manually revised and comes from the same channel, making it ideal for evaluating QASR. It is worth noting that we ensure that the MGB-2 dev/test sets are not included in QASR.

³ https://commonvoice.mozilla.org/en/datasets
⁴ Reported in June 2021.

Table 2: Description of the updated MGB-2 test set.

Item       Description
Hours      10
Episodes   17; average episode duration = 34 min
Segments   8,014
Words      69,644; unique words = 15,754
Speakers   111; males: 87 (78%), females: 13 (11%)
Variety    MSA: 78%, Dialectal Arabic: 22%
Countries  Top 5 countries (based on dialectal segments): EG: 18%, SY: 11%, PS: 11%, DZ: 8%, SD: 7%
Genre      Top 5 topics: Politics: 69%, Society: 9%, Economy: 8%, Culture/Art: 4%, Health: 3%

Table 3: QASR corpus statistics.

Item       Count    Notes
Hours      2,041
Episodes   3,545    Average episode duration = 32 min
Segments   1.6M     Average segment duration = 4 sec; 84% of segments are 2-6 sec.
                    Average segment length = 9 words; 80% of segments have 5-11 words
Words      14.3M    Unique words = 360K
Speakers   27,977   Unique speakers = 11,092
Males      1,171    1.2M segments (69%)
Females    68       99K segments (6%)
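The speaker-name standardization described above (collapsing variants such as "Barack Obama/President of USA" and the typo "Barck Obama") is not specified in detail; as an illustrative sketch, one automatic pass might look like the following, where the raw field values and the alias table are invented for the example and the alias table stands in for the manual verification step:

```python
import re

# Hypothetical raw speaker fields of the kind described above.
RAW_SPEAKERS = [
    "Barack Obama/President of USA",
    "barck obama",          # typo
    "Barack  Obama ",       # stray whitespace
]

# Manually verified alias table mapping known misspellings to canonical names.
ALIASES = {"barck obama": "barack obama"}

def normalize_speaker(raw: str) -> str:
    """One automatic pass: drop the role/title after '/', collapse
    whitespace, lowercase, then apply manually verified aliases."""
    name = raw.split("/")[0]                     # drop "/President of USA"
    name = re.sub(r"\s+", " ", name).strip().lower()
    return ALIASES.get(name, name)

# All three variants collapse to a single canonical speaker:
print({normalize_speaker(r) for r in RAW_SPEAKERS})  # → {'barack obama'}
```

In practice such a loop would be repeated, with each automatic pass surfacing new variants for manual review, matching the "several iterations" workflow the paper describes.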