Pre-Training Via Leveraging Assisting Languages for Neural Machine Translation

Total Pages: 16

File Type: PDF, Size: 1020 KB

Pre-training via Leveraging Assisting Languages for Neural Machine Translation

Haiyue Song (1), Raj Dabre (2), Zhuoyuan Mao (1), Fei Cheng (1), Sadao Kurohashi (1), Eiichiro Sumita (2)
(1) Kyoto University, Kyoto, Japan
(2) National Institute of Information and Communications Technology, Kyoto, Japan
{song, feicheng, zhuoyuanmao, [email protected]}, {raj.dabre, [email protected]}

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 279–285, July 5–10, 2020. © 2020 Association for Computational Linguistics

Abstract

Sequence-to-sequence (S2S) pre-training using large monolingual data is known to improve performance for various S2S NLP tasks. However, large monolingual corpora might not always be available for the languages of interest (LOI). Thus, we propose to exploit monolingual corpora of other languages to complement the scarcity of monolingual corpora for the LOI. We utilize script mapping (Chinese to Japanese) to increase the similarity (number of cognates) between the monolingual corpora of helping languages and LOI. An empirical case study of low-resource Japanese–English neural machine translation (NMT) reveals that leveraging large Chinese and French monolingual corpora can help overcome the shortage of Japanese and English monolingual corpora, respectively, for S2S pre-training. Using only Chinese and French monolingual corpora, we were able to improve Japanese–English translation quality by up to 8.5 BLEU in low-resource scenarios.

1 Introduction

Neural Machine Translation (NMT) (Sutskever et al., 2014; Bahdanau et al., 2015) is known to give state-of-the-art (SOTA) translations for language pairs with an abundance of parallel corpora. However, most language pairs are resource poor (Russian–Japanese, Marathi–English) as they lack large parallel corpora, and the lack of bilingual training data can be compensated for by monolingual corpora. Although it is possible to utilise the popular back-translation method (Sennrich et al., 2016a), it is time-consuming to back-translate a large amount of monolingual data, and poor-quality back-translated data tends to be of little help. Recently, another approach has gained popularity in which the NMT model is pre-trained through tasks that only require monolingual data (Song et al., 2019; Qi et al., 2018).

Pre-training using models like BERT (Devlin et al., 2018) has led to new state-of-the-art results in text understanding. However, BERT-like sequence models were not designed to be used for NMT, which is sequence-to-sequence (S2S). Song et al. (2019) recently proposed MASS, an S2S-specific pre-training task for NMT, and obtained new state-of-the-art results in low-resource settings. MASS assumes that a large amount of monolingual data is available for the languages involved, but some language pairs may lack both parallel and monolingual corpora and are “truly low-resource” and challenging.

Fortunately, languages are not isolated and often belong to “language families” in which they have similar orthography (written script; shared cognates), similar grammar, or both. Motivated by this, in this paper we hypothesize that we should be able to leverage large monolingual corpora of other, assisting languages to help the monolingual pre-training of NMT models for the languages of interest (LOI) that may lack monolingual corpora. Wherever possible, we subject the pre-training corpora to script mapping, which should help minimize the vocabulary and distribution differences between the pre-training, main training (fine-tuning) and testing datasets. This should help the already consistent pre-training and fine-tuning objectives leverage the data much better and thereby, possibly, boost translation quality.

To this end, we experiment with ASPEC Japanese–English translation in a variety of low-resource settings for the Japanese–English parallel corpora.
Our experiments reveal that while it is possible to leverage unrelated languages for pre-training, using related languages is extremely important. We utilized Chinese-to-Japanese script mapping to maximize the similarities between the assisting languages (Chinese and French) and the languages of interest (Japanese and English). We show that using only monolingual corpora of Chinese and French for pre-training can improve Japanese–English translation quality by up to 8.5 BLEU.

The contributions of our work are as follows:

1. Leveraging assisting languages: We give a novel study of leveraging monolingual corpora of related and unrelated languages for NMT pre-training.

2. Empirical evaluation: We make a comparison of existing and proposed techniques in a variety of corpora settings to verify our hypotheses.

2 Related work

Our research is at the intersection of works on monolingual pre-training for NMT and leveraging multilingualism for low-resource language translation.

Pre-training has enjoyed great success in other NLP tasks with the development of methods like BERT (Devlin et al., 2018). Song et al. (2019) recently proposed MASS, a new state-of-the-art NMT pre-training task that jointly trains the encoder and the decoder. Our approach builds on the initial idea of MASS, but focuses on complementing the potential scarcity of monolingual corpora for the languages of interest using relatively larger monolingual corpora of other (assisting) languages.
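As a concrete illustration of the MASS-style objective this work builds on, the following minimal sketch generates masked-span training pairs from a monolingual sentence: the encoder sees the sentence with a contiguous span replaced by mask tokens, and the decoder is trained to reconstruct that span. The function name, mask token and masking ratio are illustrative assumptions, not the authors' released code.

import random

MASK = "<mask>"

def make_mass_pair(tokens, mask_ratio=0.5):
    """Build one MASS-style training pair from a monolingual sentence.

    The encoder input has a contiguous span replaced by <mask> tokens;
    the decoder target is exactly that masked span.
    """
    n = len(tokens)
    span_len = max(1, int(n * mask_ratio))
    start = random.randint(0, n - span_len)
    end = start + span_len
    encoder_input = tokens[:start] + [MASK] * span_len + tokens[end:]
    decoder_target = tokens[start:end]
    return encoder_input, decoder_target

# Example with a (script-mapped) monolingual sentence from an assisting language:
sentence = "we leverage assisting languages for monolingual pre training".split()
src, tgt = make_mass_pair(sentence)
# src: e.g. ['we', 'leverage', '<mask>', '<mask>', '<mask>', '<mask>', 'pre', 'training']
# tgt: e.g. ['assisting', 'languages', 'for', 'monolingual']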
On the other hand, leveraging multilingualism involves cross-lingual transfer (Zoph et al., 2016), which addresses the low-resource issue by using data from different language pairs. Dabre et al. (2017) showed the importance of transfer learning between languages belonging to the same language family, but corpora might not always be available in a related language. A mapping between Chinese and Japanese characters (Chu et al., 2012) was shown to be useful for Chinese–Japanese dictionary construction (Dabre et al., 2015). Mappings between scripts or unification of scripts (Hermjakob et al., 2018) can artificially increase the similarity between languages, which motivates most of our work.

3 Proposed Method: Using Assisting Languages

We propose a novel monolingual pre-training method for NMT which leverages monolingual corpora of assisting languages to overcome the scarcity of monolingual and parallel corpora of the languages of interest (LOI). The framework of our approach is shown in Figure 1 and consists of script mapping, data selection, pre-training and fine-tuning.

Figure 1: An overview of our proposed method consisting of script mapping, data selection, pre-training and fine-tuning.

3.1 Data Pre-processing

Blindly pre-training an NMT model on vast amounts of monolingual data belonging to the assisting languages and LOI might improve translation quality slightly. However, divergences between the languages, especially their scripts (Hermjakob et al., 2018), and also between the data distributions of the different training phases, are known to impact the final result. Motivated by past works on using related languages (Dabre et al., 2017), orthography mapping/unification (Hermjakob et al., 2018; Chu et al., 2012) and data selection for MT (Axelrod et al., 2011), we propose to improve the efficacy of pre-training by reducing data and language divergence.

3.1.1 Script Mapping

Previous research has shown that enforcing shared orthography (Sennrich et al., 2016b; Dabre et al., 2015) has a strong positive impact on translation. Following this, we propose to leverage existing script mapping rules [1] or script unification mechanisms to, at the very least, maximize the possibility of cognate sharing and thereby bring the assisting language closer to the LOI. This should strongly impact languages such as Hindi, Punjabi and Bengali, which belong to the same family but are written using different scripts.

For languages such as Korean, Chinese and Japanese there may exist a many-to-many mapping between their scripts. Thus, an incorrect mapping of characters (the basic units of a script) might produce wrong words and reduce cognate sharing. We propose two solutions to address this.

1. One-to-one mapping: Here we do not consider word-level information and simply map each character in one language to its corresponding character in the other language, selecting the first entry in the mapping list.

2. Many-to-many mapping with LM scoring: A more sophisticated solution where, for each tokenized word-level segment in one language, we enumerate all possible combinations of mapped characters and use a language model in the other language to select the character combination with the highest score.

[1] Transliteration is another option, but transliteration systems are relatively unreliable compared to handcrafted rule tables.

3.1.2 Note on Chinese–Japanese Scripts

Japanese is written in Kanji, which was borrowed from China. Over time the written scripts have diverged and the pronunciations are naturally different, but there are a significant number of cognates written in both languages. As such, pre-training on Chinese should benefit translation involving Japanese. Chu et al. (2012) created a mapping table between the two scripts, which can be leveraged to further increase the number of cognates.

3.1.3 Data Selection

Often, the pre-training monolingual data and the […]ically the monolingual corpus). When selecting monolingual data of the languages of interest, we can first calculate the length distribution of the parallel data as the target distribution (the ratio of each length in TargetFile) and fill this length distribution by selecting sentences from monolingual data of the same language, as in Algorithm 1.

Algorithm 1: Length Distribution Data Selection
  Input : TargetFile, InputFile, SelectNum
  Output: SelectedLines
   1  TargetD ← {}        // target length distribution
   2  CurrentD ← {}       // length distribution selected so far
   3  SelectedLines ← {}
   4  TargetNum ← number of lines in TargetFile
   5  foreach Line ∈ TargetFile do
   6      TargetD[len(Line)] += 1
   7  foreach Line ∈ InputFile do
   8      if CurrentD[len(Line)] / SelectNum < TargetD[len(Line)] / TargetNum then
   9          CurrentD[len(Line)] += 1
  10          SelectedLines ← SelectedLines ∪ {Line}
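For concreteness, Algorithm 1 can be implemented directly in a few lines of Python. The sketch below assumes that sentence length is measured in whitespace-separated tokens and that the corpora fit in memory; both are our assumptions, since the paper specifies only the selection criterion.

from collections import defaultdict

def select_by_length_distribution(target_lines, input_lines, select_num):
    """Select sentences from input_lines whose length distribution
    mimics that of target_lines (Algorithm 1)."""
    target_d = defaultdict(int)   # sentence length -> count in the target (parallel) data
    current_d = defaultdict(int)  # sentence length -> count selected so far
    selected = []

    target_num = len(target_lines)
    for line in target_lines:
        target_d[len(line.split())] += 1

    for line in input_lines:
        length = len(line.split())
        # Accept the line only while its length bucket is still
        # under-represented relative to the target distribution.
        if current_d[length] / select_num < target_d[length] / target_num:
            current_d[length] += 1
            selected.append(line)
    return selected

# Example usage (file names are placeholders):
# target = open("aspec_parallel.ja", encoding="utf-8").read().splitlines()
# mono   = open("mono_zh_mapped.ja", encoding="utf-8").read().splitlines()
# pretraining_data = select_by_length_distribution(target, mono, select_num=1_000_000)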
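Looking back at the two script-mapping options of Section 3.1.1, the sketch below contrasts them on a toy Chinese-to-Japanese mapping table: option 1 takes the first candidate for every character, while option 2 enumerates candidate combinations for a tokenized word and keeps the one a target-side language model scores highest. The table format and the scoring interface are illustrative assumptions; the paper relies on the mapping table of Chu et al. (2012) and does not prescribe a particular language model.

from itertools import product

# Toy many-to-many character mapping (simplified Chinese -> Japanese kanji candidates).
# A real table such as that of Chu et al. (2012) is far larger.
MAPPING = {
    "发": ["発", "髪"],  # one simplified character, two possible kanji
    "理": ["理"],
}

def map_one_to_one(text):
    """Option 1: map every character to its first candidate, ignoring context."""
    return "".join(MAPPING.get(ch, [ch])[0] for ch in text)

def map_with_lm(word, lm_score):
    """Option 2: enumerate all candidate combinations for a tokenized word and
    return the combination the target-side language model scores highest."""
    candidates = [MAPPING.get(ch, [ch]) for ch in word]
    return max(("".join(combo) for combo in product(*candidates)), key=lm_score)

# A stand-in scorer that prefers the attested Japanese word (a real LM would be used here).
def dummy_lm(word):
    return 1.0 if word == "理髪" else 0.0

print(map_one_to_one("理发"))           # 理発 (first candidate; wrong in this context)
print(map_with_lm("理发", dummy_lm))    # 理髪 (LM scoring recovers the correct cognate)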
Recommended publications
  • A Method to Accommodate Backward Compatibility on the Learning Application-Based Transliteration to the Balinese Script
  • Effects and Mitigation of Out-Of-Vocabulary in Universal Language Models
  • Child Readers' Eye Movements in Reading Thai
  • Das Deutsche Schriftsystem (Compu-Skript) [The German Writing System]
  • Proposal to Encode the Grantha Script in Unicode
  • Schriftzeichen [Written Characters]
  • Recognition of Punctuation in Voiced and Unvoiced Speech for iB-CET
  • How Children Learn to Write Words
  • The Alphabetic Revolution, Writing Systems, and Scribal Training in Ancient Israel
  • Wikification for Scriptio Continua
  • Epigraphic Treebanks: Some Considerations from a Work in Progress
  • N4704-Medieval-Punct.Pdf