Deliverable D3.3.-B Third Batch of Language Resources

Total pages: 16

File type: PDF, size: 1020 KB

Deliverable D3.3.-B Third batch of language resources: actions on resources
Contract no. 271022, CESAR: Central and South-East European Resources (Project no. 271022)
Version No. 1.1, 07/02/2013

Document Information

Deliverable number: D3.3.-B
Deliverable title: Third batch of language resources: actions on resources
Due date of deliverable: 31/01/2013
Actual submission date of deliverable: 08/02/2013
Main author(s): György Szaszák (BME-TMIT)
Participants: Mátyás Bartalis, András Balog (BME-TMIT); Željko Agić, Božo Bekavac, Nikola Ljubešić, Sanda Martinčić-Ipšić, Ida Raffaelli, Jan Šnajder, Krešo Šojat, Vanja Štefanec, Marko Tadić (FFZG); Cvetana Krstev, Duško Vitas, Miloš Utvić, Ranka Stanković, Ivan Obradović, Miljana Milanović, Anđelka Zečević, Mirko Spasić (UBG); Svetla Koeva, Tsvetana Dimitrova, Ivelina Stoyanova (IBL); Tibor Pintér (HASRIL); Mladen Stanojević, Mirko Spasić, Natalija Kovačević, Uroš Milošević, Nikola Dragović, Jelena Jovanović (IPUP); Anna Kibort, Łukasz Kobyliński, Mateusz Kopeć, Katarzyna Krasnowska, Michał Lenart, Marek Maziarz, Marcin Miłkowski, Bartłomiej Nitoń, Agnieszka Patejuk, Aleksander Pohl, Piotr Przybyła, Agata Savary, Filip Skwarski, Jan Szejko, Jakub Waszczuk, Marcin Woliński, Alina Wróblewska, Bartosz Zaborowski, Marcin Zając (IPIPAN); Piotr Pęzik, Łukasz Dróżdż, Maciej Buczek (ULodz); Radovan Garabík, Adriána Žáková (LSIL)
Internal reviewer: HASRIL
Workpackage: WP3
Workpackage title: Enhancing language resources
Workpackage leader: IPIPAN
Dissemination level: PP
Version: 1.1
Keywords: upload, interoperability, standardization, harmonization, upgrade, extension, linking

History of Versions

Version | Date | Status | Author (Partner) | Description
1.1 | 08/02/2013 | final | Tamás Váradi (HASRIL) | proofreading
1.0 | 07/02/2013 | final | Tibor Pintér (HASRIL) | proofreading
0.9 | 06/02/2013 | final | György Szaszák (BME-TMIT) | editing, cross-checking
0.8 | 05/02/2013 | final | Tibor Pintér (HASRIL) | editing
0.7 | 19/01/2013 | draft | György Szaszák (BME-TMIT), Maciej Ogrodniczuk (IPIPAN) | contributions from all partners

Executive summary

This deliverable, D3.3.-B, is a supplement to the publicly available deliverable D3.3, which covers the third upload batch of language resources and tools. The emphasis here is on describing the work and activities carried out on each resource included in the third upload batch. Work on resources and tools uploaded in the first or second batch but extended or updated for the third batch is also documented.

Table of Contents

Abbreviations
0. Scope
0.1. Actions – upgrading resources
0.2. Actions – extending and linking resources
0.3. Actions – aligning resources across languages
0.4. Management of the 3rd batch
1. HASRIL resources
1.23. Hungarian National Corpus
1.24. HUCOMTECH multimodal database
1.25. BEA Hungarian spontaneous speech database
1.26. Hungarian kindergarten language corpus
1.27. ht-online (lexical resource of the Hungarian outside Hungary)
1.28. Hungarian concise dictionary (with sample sentences)
1.29. Hungarian historical corpus
1.30. N-grams from Hungarian National Corpus
2. BME-TMIT resources
2.14. Hungarian Book (Egri csillagok/Eclipse of the Crescent Moon by Géza Gárdonyi) Reading Speech and Aligned Text Selection Database
2.15. Hungarian Poem (János vitéz/John the Valiant by Sándor Petőfi) Reading Speech and Aligned Text Selection Database
2.16. Hungarian Parliamentary Speech and Aligned Text Selection Database
2.17. Named entity lexical database
2.18. Hungarian formant database
2.19. Graphical query interface for Hungarian Read Speech Precisely Labelled Parallel Speech Corpus Collection
2.20. Medical Speech Database
2.21. Automatic Prosodic Segmenter
2.22. Hungarian Phonetic Transcriber
2.23. Hungarian MALACH Speech Database
2.24. Hungarian Broadcast Conversation Database from the Catholic Radio of Eger (EKR)
2.25. Accent marker database for Hungarian written sentences
3. FFZG resources
3.13. Croatian Language Web Services (hrWS)
3.14. Croatian Translations of Acquis Communautaire (hrAcquis)
3.15. Croatian National Corpus v3.0 (HNK v3.0)
3.16. Corpus of Narodne novine (NNCorp)
3.17. Croatian n-grams (hrNgrams)
3.18. Croatian Morphological Lexicon v5.0 (HML5)
3.19. Orwell 1984 Croatian (hr1984)
3.20. Croatian Wordnet (CroWN)
3.21. Croatian ACD (hrACD)
3.22. Croatian Weather Dialogue Corpus (CWEDIC)
3.23. CESAR Aligned Wikipedia Headwords List (hrACD)
3.24. Croatian and Slovene NERC models for Stanford NERC (hrStanfordNERC, slStanfordNERC)
3.25. Coral Corpus Aligner (Coral)
3.26. Croatian Sentiment Lexicon (CroSentiLex)
4. IPIPAN resources
4.1. Polish Sejm Corpus
4.2. PoliMorf Inflectional Dictionary
4.3. Polish WordNet
4.4. Nerf – Named Entity Recognition Tool
4.13. Morfeusz
4.19. Valence
Recommended publications
  • International Computer Science Institute Activity Report 2007
    International Computer Science Institute Activity Report 2007. International Computer Science Institute, 1947 Center Street, Suite 600, Berkeley, CA 94704-1198 USA. Phone: (510) 666 2900, fax: (510) 666 2956, [email protected], http://www.icsi.berkeley.edu

    PRINCIPAL 2007 SPONSORS: Cisco; Defense Advanced Research Projects Agency (DARPA); Disruptive Technology Office (DTO, formerly ARDA); European Union (via University of Edinburgh); Finnish National Technology Agency (TEKES); German Academic Exchange Service (DAAD); Google; IM2 National Centre of Competence in Research, Switzerland; Microsoft; National Science Foundation (NSF); Qualcomm; Spanish Ministry of Education and Science (MEC).

    AFFILIATED 2007 SPONSORS: Appscio; Ask; AT&T; Intel; National Institutes of Health (NIH); SAP; Volkswagen.

    CORPORATE OFFICERS: Prof. Nelson Morgan (President and Institute Director); Dr. Marcia Bush (Vice President and Associate Director); Prof. Scott Shenker (Secretary and Treasurer).

    BOARD OF TRUSTEES, JANUARY 2008: Prof. Javier Aracil, MEC and Universidad Autónoma de Madrid; Prof. Hervé Bourlard, IDIAP and EPFL; Vice Chancellor Beth Burnside, UC Berkeley; Dr. Adele Goldberg, Agile Mind, Inc. and Pharmaceutrix, Inc.; Dr. Greg Heinzinger, Qualcomm; Mr. Clifford Higgerson, Walden International; Prof. Richard Karp, ICSI and UC Berkeley; Prof. Nelson Morgan, ICSI (Director) and UC Berkeley; Dr. David Nagel, Ascona Group; Prof. Prabhakar Raghavan, Stanford and Yahoo!; Prof. Stuart Russell, UC Berkeley Computer Science Division Chair; Mr. Jouko Salo, TEKES; Prof. Shankar Sastry, UC Berkeley, Dean
  • CLIB 2016 Proceedings
    The Second International Conference Computational Linguistics in Bulgaria (CLIB 2016) is organised within the Operation for Support for International Scientific Conferences Held in Bulgaria of the National Science Fund, Grant № ДПМНФ 01/9 of 11 Aug 2016. CLIB 2016 is organised by the Department of Computational Linguistics, Institute for Bulgarian Language, Bulgarian Academy of Sciences.

    PUBLICATION AND CATALOGUING INFORMATION. Title: Proceedings of the Second International Conference Computational Linguistics in Bulgaria (CLIB 2016). ISSN: 2367-5675 (online). Published and distributed by: The Institute for Bulgarian Language Prof. Lyubomir Andreychin, Bulgarian Academy of Sciences. Editorial address: Institute for Bulgarian Language Prof. Lyubomir Andreychin, Bulgarian Academy of Sciences, 52 Shipchenski prohod blvd., bldg. 17, Sofia 1113, Bulgaria, +359 2/ 872 23 02. Copyright of each paper stays with the respective authors. The works in the Proceedings are licensed under a Creative Commons Attribution 4.0 International Licence (CC BY 4.0). Licence details: http://creativecommons.org/licenses/by/4.0

    Proceedings of the Second International Conference Computational Linguistics in Bulgaria, 9 September 2016, Sofia, Bulgaria. PREFACE: We are excited to welcome you to the second edition of the International Conference Computational Linguistics in Bulgaria (CLIB 2016) in Sofia, Bulgaria! CLIB aspires to foster the NLP community in Bulgaria and further the cooperation among researchers working in NLP for Bulgarian around the world. The need for a conference dedicated to NLP research dealing with or applicable to Bulgarian has been felt for quite some time. We believe that building a strong community of researchers and teams who have chosen to work on Bulgarian is a key factor to meeting the challenges and requirements posed to computational linguistics and NLP in Bulgaria.
  • D1.3 Deliverable Title: Action Ontology and Object Categories Type
    Deliverable number: D1.3. Deliverable title: Action ontology and object categories. Type (Internal, Restricted, Public): PU. Authors: Daiva Vitkute-Adzgauskiene, Jurgita Kapociute, Irena Markievicz, Tomas Krilavicius, Minija Tamosiunaite, Florentin Wörgötter. Contributing partners: UGOE, VMU. Project acronym: ACAT. Project type: STREP. Project title: Learning and Execution of Action Categories. Contract number: 600578. Starting date: 01-03-2013. Ending date: 30-04-2016. Contractual date of delivery to the EC: 31-08-2015. Actual date of delivery to the EC: 04-09-2015.

    Content: 1. Executive summary; 2. Introduction; 3. Overview of the ACAT ontology structure; 4. Object categorization by their semantic roles in the instruction sentence; 5. Hierarchical object structures; 6. Multi-word object name resolution; 7. Conclusions and future work; 8. References
  • A Resource of Corpus-Derived Typed Predicate Argument Structures for Croatian
    CROATPAS: A Resource of Corpus-derived Typed Predicate Argument Structures for Croatian. Costanza Marini and Elisabetta Ježek, University of Pavia, Department of Humanities. [email protected], [email protected]

    Abstract: The goal of this paper is to introduce CROATPAS, the Croatian sister project of the Italian Typed-Predicate Argument Structure resource (TPAS, Ježek et al. 2014). CROATPAS is a corpus-based digital collection of verb valency structures with the addition of semantic type specifications (SemTypes) to each argument slot, which is currently being developed at the University of Pavia. Salient verbal patterns are discovered following a lexicographical methodology called Corpus Pattern Analysis (CPA; Hanks 2004 & 2012; Hanks & Pustejovsky 2005; Hanks et al. 2015), whereas SemTypes – such as [HUMAN], [ENTITY] or [ANIMAL] – are taken from a shallow ontology shared by both TPAS and the Pattern Dictionary of English Verbs (PDEV; Hanks & Pustejovsky 2005; El Maarouf et al. 2014). The potential uses and purposes of the resource range from multilingual pattern linking between compatible resources to computer-assisted language learning (CALL).

    1 Introduction: Nowadays, we live in a time when digital tools and resources for language technology are constantly mushrooming all around the world. However, we should remind ourselves that some languages need our attention more than others if they are not to face – to put it in Rehm and Hegele's words – "a steadily increasing and rather severe threat of digital extinction" (2018: 3282). According to the findings of initiatives such as the META-NET White Paper Series (Tadić et al. 2012; Rehm et al. 2014), we can state that Croatian is unfortunately among the 21 out of 24 official languages of the European Union that are
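    To make the typed predicate-argument idea above concrete, here is a minimal Python sketch of one possible way to represent a verb pattern with SemType-annotated argument slots; the class names, fields and the toy example are illustrative assumptions, not the actual CROATPAS schema.

        # Illustrative sketch only: a corpus-derived verb pattern whose argument
        # slots carry shallow-ontology semantic types (SemTypes), in the spirit
        # of the TPAS/CROATPAS description above. Names are assumptions, not the
        # resource's real data model.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class ArgumentSlot:
            role: str          # e.g. "subject" or "object"
            sem_type: str      # shallow-ontology label such as "[HUMAN]" or "[ENTITY]"

        @dataclass
        class VerbPattern:
            lemma: str                                  # the verb the pattern describes
            slots: List[ArgumentSlot] = field(default_factory=list)
            example: str = ""                           # a corpus example sentence

        # A toy pattern: someone ([HUMAN]) reads something ([DOCUMENT]).
        pattern = VerbPattern(
            lemma="čitati",
            slots=[ArgumentSlot("subject", "[HUMAN]"),
                   ArgumentSlot("object", "[DOCUMENT]")],
            example="Studentica čita knjigu.",
        )
        print(pattern)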
  • 3 Corpus Tools for Lexicographers
    3 Corpus tools for lexicographers. ADAM KILGARRIFF AND IZTOK KOSEM. 3.1 Introduction. To analyse corpus data, lexicographers need software that allows them to search, manipulate and save data, a 'corpus tool'. A good corpus tool is the key to a comprehensive lexicographic analysis—a corpus without a good tool to access it is of little use. Both corpus compilation and corpus tools have been swept along by general technological advances over the last three decades. Compiling and storing corpora has become far faster and easier, so corpora tend to be much larger than previous ones. Most of the first COBUILD dictionary was produced from a corpus of eight million words. Several of the leading English dictionaries of the 1990s were produced using the British National Corpus (BNC), of 100 million words. Current lexicographic projects we are involved in use corpora of around a billion words—though this is still less than one hundredth of one percent of the English language text available on the Web (see Rundell, this volume). The amount of data to analyse has thus increased significantly, and corpus tools have had to be improved to assist lexicographers in adapting to this change. Corpus tools have become faster, more multifunctional, and customizable. In the COBUILD project, getting concordance output took a long time and then the concordances were printed on paper and handed out to lexicographers (Clear 1987).
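    The corpus-tool functionality described above centres on searching a corpus and producing concordances. The following Python sketch is a toy keyword-in-context (KWIC) search, shown purely as an illustration of the idea; it is not the interface of any tool mentioned in the excerpt.

        # Toy keyword-in-context (KWIC) concordance: print each hit of a keyword
        # with a fixed amount of left and right context, as a corpus tool would.
        import re

        def kwic(text, keyword, width=30):
            """Yield (left context, keyword, right context) for every match."""
            for match in re.finditer(r"\b%s\b" % re.escape(keyword), text, re.IGNORECASE):
                start, end = match.span()
                left = text[max(0, start - width):start]
                right = text[end:end + width]
                yield left.rjust(width), match.group(0), right.ljust(width)

        corpus = ("Strong coffee keeps lexicographers awake; "
                  "powerful computers keep their corpora searchable.")
        for left, kw, right in kwic(corpus, "coffee"):
            print(f"{left} [{kw}] {right}")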
  • Collocation Classification with Unsupervised Relation Vectors
    Collocation Classification with Unsupervised Relation Vectors. Luis Espinosa-Anke, Leo Wanner, and Steven Schockaert. School of Computer Science, Cardiff University, United Kingdom; ICREA and NLP Group, Universitat Pompeu Fabra, Barcelona, Spain. {espinosa-ankel,schockaerts1}@cardiff.ac.uk, [email protected]

    Abstract: Lexical relation classification is the task of predicting whether a certain relation holds between a given pair of words. In this paper, we explore to which extent the current distributional landscape based on word embeddings provides a suitable basis for classification of collocations, i.e., pairs of words between which idiosyncratic lexical relations hold. First, we introduce a novel dataset with collocations categorized according to lexical functions. Second, we conduct experiments on a subset of this benchmark, comparing it in particular to the well known DiffVec dataset. In these experiments, in addition to simple word vector arithmetic operations, we also investigate the role of unsupervised relation vectors as a complementary input.

    Morphosyntactic relations have been the focus of work on unsupervised relational similarity, as it has been shown that verb conjugation or nominalization patterns are relatively well preserved in vector spaces (Mikolov et al., 2013; Pennington et al., 2014a). Semantic relations pose a greater challenge (Vylomova et al., 2016), however. In fact, as of today, it is unclear which operation performs best (and why) for the recognition of individual lexico-semantic relations (e.g., hyperonymy or meronymy, as opposed to cause, location or action). Still, a number of works address this challenge. For instance, hypernymy has been modeled using vector concatenation (Baroni et al., 2012), vector difference and component-wise squared difference (Roller et al., 2014) as input to linear
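    The pair-encoding operations named in the excerpt (vector concatenation, vector difference and component-wise squared difference) are easy to sketch; the snippet below builds such features for a word pair from tiny random placeholder embeddings, which stand in for real word vectors and are not the paper's data.

        # Sketch of common word-pair feature constructions used for lexical
        # relation classification; the resulting vectors would normally be fed
        # to a linear classifier.
        import numpy as np

        rng = np.random.default_rng(0)
        # Placeholder 50-dimensional "embeddings" for a handful of words.
        emb = {w: rng.normal(size=50) for w in ["dog", "animal", "strong", "coffee"]}

        def pair_features(w1, w2):
            v1, v2 = emb[w1], emb[w2]
            return {
                "concat": np.concatenate([v1, v2]),   # concatenation
                "diff": v1 - v2,                      # vector difference
                "sq_diff": (v1 - v2) ** 2,            # component-wise squared difference
            }

        features = pair_features("dog", "animal")
        print({name: vec.shape for name, vec in features.items()})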
  • Comparative Evaluation of Collocation Extraction Metrics
    Comparative Evaluation of Collocation Extraction Metrics. Aristomenis Thanopoulos, Nikos Fakotakis, George Kokkinakis. Wire Communications Laboratory, Electrical & Computer Engineering Dept., University of Patras, 265 00 Rion, Patras, Greece. {aristom,fakotaki,gkokkin}@wcl.ee.upatras.gr

    Abstract: Corpus-based automatic extraction of collocations is typically carried out employing some statistic indicating concurrency in order to identify words that co-occur more often than expected by chance. In this paper we are concerned with some typical measures such as the t-score, Pearson's χ-square test, log-likelihood ratio, pointwise mutual information and a novel information theoretic measure, namely mutual dependency. Apart from some theoretical discussion about their correlation, we perform comparative evaluation experiments judging performance by their ability to identify lexically associated bigrams. We use two different gold standards: WordNet and lists of named-entities. Besides discovering that a frequency-biased version of mutual dependency performs the best, followed close by likelihood ratio, we point out some implications that usage of available electronic dictionaries such as the WordNet for evaluation of collocation extraction encompasses.

    1. Introduction: Collocational information is important not only for second language learning but also for many natural language processing tasks. Specifically, in natural language generation and machine translation it is necessary to ensure generation of lexically correct expressions; for example, "strong", unlike "powerful", modifies "coffee" but not "computers". [...] dependence, several metrics have been adopted by the corpus linguistics community. Typical statistics are the t-score (TSC), Pearson's χ-square test (χ2), log-likelihood ratio (LLR) and pointwise mutual information (PMI). However, usually no systematic comparative evaluation is accomplished. For example, Kita et al. (1993) accomplish partial and intuitive comparisons between metrics, while Smadja (1993)
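    The association measures listed above can all be computed from simple bigram and unigram counts. The Python sketch below gives one straightforward formulation of pointwise mutual information, the t-score and Dunning's log-likelihood ratio; the counts in the example call are made up for illustration.

        # Association scores for a candidate bigram, computed from its count
        # (c12), the counts of its two words (c1, c2) and the corpus size (n).
        import math

        def association_scores(c12, c1, c2, n):
            expected = c1 * c2 / n
            pmi = math.log2(c12 * n / (c1 * c2))            # pointwise mutual information
            t_score = (c12 - expected) / math.sqrt(c12)     # t-score

            # Dunning's log-likelihood ratio over the 2x2 contingency table.
            k = [[c12, c1 - c12],
                 [c2 - c12, n - c1 - c2 + c12]]
            rows = [sum(k[0]), sum(k[1])]
            cols = [k[0][0] + k[1][0], k[0][1] + k[1][1]]
            llr = 2 * sum(
                k[i][j] * math.log(k[i][j] / (rows[i] * cols[j] / n))
                for i in range(2) for j in range(2) if k[i][j] > 0
            )
            return pmi, t_score, llr

        # Made-up counts: the bigram occurs 30 times in a 100,000-token sample.
        print(association_scores(c12=30, c1=500, c2=200, n=100000))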
  • Conference Abstracts
    EIGHTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION. Held under the Patronage of Ms Neelie Kroes, Vice-President of the European Commission, Digital Agenda Commissioner. MAY 23-24-25, 2012, ISTANBUL LÜTFI KIRDAR CONVENTION & EXHIBITION CENTRE, ISTANBUL, TURKEY. CONFERENCE ABSTRACTS. Editors: Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, Stelios Piperidis. Assistant Editors: Hélène Mazo, Sara Goggi, Olivier Hamon. © ELRA – European Language Resources Association. All rights reserved.

    LREC 2012, EIGHTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION. Title: LREC 2012 Conference Abstracts. Distributed by: ELRA – European Language Resources Association, 55-57, rue Brillat Savarin, 75013 Paris, France. Tel.: +33 1 43 13 33 33. Fax: +33 1 43 13 33 30. www.elra.info and www.elda.org. Email: [email protected] and [email protected]. Copyright by the European Language Resources Association. ISBN 978-2-9517408-7-7, EAN 9782951740877. All rights reserved. No part of this book may be reproduced in any form without the prior permission of the European Language Resources Association.

    Introduction of the Conference Chair, Nicoletta Calzolari: I wish first to express to Ms Neelie Kroes, Vice-President of the European Commission, Digital Agenda Commissioner, the gratitude of the Program Committee and of all LREC participants for her Distinguished Patronage of LREC 2012. Even if every time I feel we have reached the top, this 8th LREC is continuing the tradition of breaking previous records: this edition we received 1013 submissions and have accepted 697 papers, after reviewing by the impressive number of 715 colleagues.
  • Collocation Extraction Using Parallel Corpus
    Collocation Extraction Using Parallel Corpus. Kavosh Asadi Atui (1), Heshaam Faili (1), Kaveh Assadi Atuie (2). (1) NLP Laboratory, Electrical and Computer Engineering Dept., University of Tehran, Iran; (2) Electrical Engineering Dept., Sharif University of Technology, Iran. [email protected], [email protected], [email protected]

    ABSTRACT: This paper presents a novel method to extract the collocations of the Persian language using a parallel corpus. The method is applicable whenever a parallel corpus between the target language and any other high-resource language is available. Without the need for an accurate parser for the target side, it aims to parse the sentences to capture long distance collocations and to generate more precise results. A training data set built by bootstrapping is also used to rank the candidates with a log-linear model. The method improves the precision and recall of collocation extraction by 5 and 3 percent respectively in comparison with the window-based statistical method in terms of being a Persian multi-word expression. KEYWORDS: Information and Content Extraction, Parallel Corpus, Under-resourced Languages. Proceedings of COLING 2012: Posters, pages 93–102, COLING 2012, Mumbai, December 2012.

    1 Introduction: Collocation is usually interpreted as the occurrence of two or more words within a short space in a text (Sinclair, 1987). This definition, however, is not precise, because it is not possible to define a short space. It also implies the strategy that all traditional models had: they were looking for co-occurrences rather than collocations (Seretan, 2011). Consider the following sentence and its Persian translation: "Lecturer issued a major and also impossible to solve problem." مدرس یک مشکل بزرگ و غیر قابل حل را عنوان کرد.
  • Morphological Doublets in Croatian: a Multi-Methodological Analysis
    Morphological Doublets in Croatian: A multi-methodological analysis. By Dario Lečić. A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy. The University of Sheffield, Faculty of Arts and Humanities, Department of Russian and Slavonic Studies. 20 January, 2017.

    Acknowledgments: Many a PhD student past and present will agree that doing a PhD is a time-consuming process with lots of ups and downs, motivational issues and even a number of nervous breakdowns. Having experienced all of these, I can only say that they were right. However, having reached the end of the tunnel, I have to admit that the feeling is great. I would like to use this opportunity to thank all the people who made this possible and who have helped me during these four years spent researching the intricate world of morphological doublets in Croatian. First of all, I would like to express my immense gratitude to my primary supervisor, Professor Neil Bermel from the Department of Russian and Slavonic Studies at the University of Sheffield, for offering his guidance from day one. Our regular supervisory meetings as well as numerous e-mail exchanges have been eye-opening and I would not have been able to do this without you. I hope this dissertation will justify all the effort you have put into me as your PhD student. Even though the jurisdiction of the second supervisor as defined by the University of Sheffield officially stretches mostly to matters of the Doctoral Development Programme, my second supervisor, Dr Dagmar Divjak, nevertheless played a major role in this research as well, primarily in matters of statistics.
  • Dish2vec: a Comparison of Word Embedding Methods in an Unsupervised Setting
    dish2vec: A Comparison of Word Embedding Methods in an Unsupervised Setting. Guus Verstegen, student number 481224, 11th June 2019. Erasmus University Rotterdam, Erasmus School of Economics. Supervisor: Dr. M. van de Velden. Second assessor: Dr. F. Frasincar. Master Thesis Econometrics and Management Science, Business Analytics and Quantitative Marketing.

    Abstract: While the popularity of continuous word embeddings has increased over the past years, detailed derivations and extensive explanations of these methods are lacking in the academic literature. This paper elaborates on three popular word embedding methods: GloVe and two versions of word2vec, continuous skip-gram and continuous bag-of-words. This research aims to enhance our understanding of both the foundation and extensions of these methods. In addition, this research addresses instability of the methods with respect to the hyperparameters. An example in a food menu context is used to illustrate the application of these word embedding techniques in quantitative marketing. For this specific case, the GloVe method performs best according to external expert evaluation. Nevertheless, the fact that GloVe performs best is not generalizable to other domains or data sets due to the instability of the models and the evaluation procedure. Keywords: word embedding; word2vec; GloVe; quantitative marketing; food.

    Contents: 1 Introduction; 2 Literature review; 2.1 Discrete distributional representation of text; 2.2 Continuous distributed representation of text; 2.3 word2vec and GloVe; 2.4 Food menu related literature; 3 Methodology: the base implementation; 3.1 Quantified word similarity: cosine similarity; 3.2 Model optimization: stochastic gradient descent; 3.3 word2vec
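    The thesis excerpt quantifies word similarity as the cosine similarity between embedding vectors. The short Python sketch below illustrates that computation on made-up vectors; it is not code from the thesis.

        # Cosine similarity between two vectors: u.v / (|u| |v|).
        import numpy as np

        def cosine_similarity(u, v):
            return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

        rng = np.random.default_rng(42)
        pizza, pasta = rng.normal(size=100), rng.normal(size=100)
        print(cosine_similarity(pizza, pasta))   # near 0 for unrelated random vectors
        print(cosine_similarity(pizza, pizza))   # exactly 1.0 for identical vectors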