SemEval-2017 Task 2: Multilingual and Cross-lingual Semantic Word Similarity

Jose Camacho-Collados*1, Mohammad Taher Pilehvar*2, Nigel Collier2 and Roberto Navigli1
1Department of Computer Science, Sapienza University of Rome
2Department of Theoretical and Applied Linguistics, University of Cambridge
1{collados,navigli}@di.uniroma1.it
2{mp792,nhc30}@cam.ac.uk
* Authors marked with * contributed equally.

Abstract

This paper introduces a new task on Multilingual and Cross-lingual Semantic Word Similarity which measures the semantic similarity of word pairs within and across five languages: English, Farsi, German, Italian and Spanish. High quality datasets were manually curated for the five languages with high inter-annotator agreements (consistently in the 0.9 ballpark). These were used for semi-automatic construction of ten cross-lingual datasets. 17 teams participated in the task, submitting 24 systems in subtask 1 and 14 systems in subtask 2. Results show that systems that combine statistical knowledge from text corpora, in the form of word embeddings, and external knowledge from lexical resources are best performers in both subtasks. More information can be found on the task website: http://alt.qcri.org/semeval2017/task2/

1 Introduction

Measuring the extent to which two words are semantically similar is one of the most popular research fields in lexical semantics, with a wide range of Natural Language Processing (NLP) applications. Examples include Word Sense Disambiguation (Miller et al., 2012), Information Retrieval (Hliaoutakis et al., 2006), Machine Translation (Lavie and Denkowski, 2009), Lexical Substitution (McCarthy and Navigli, 2009), Question Answering (Mohler et al., 2011), Text Summarization (Mohammad and Hirst, 2012), and Ontology Alignment (Pilehvar and Navigli, 2014). Moreover, word similarity is generally accepted as the most direct in-vitro evaluation framework for word representation, a research field that has recently received massive research attention, mainly as a result of advancements in the use of neural networks for learning dense low-dimensional semantic representations, often referred to as word embeddings (Mikolov et al., 2013; Pennington et al., 2014). Almost any application in NLP that deals with semantics can benefit from efficient semantic representation of words (Turney and Pantel, 2010).

However, research in semantic representation has in the main focused on the English language only. This is partly due to the limited availability of word similarity benchmarks in languages other than English. Given the central role of similarity datasets in lexical semantics, and given the importance of moving beyond the barriers of the English language and developing language-independent and multilingual techniques, we felt that this was an appropriate time to conduct a task that provides a reliable framework for evaluating multilingual and cross-lingual semantic representation and similarity techniques. The task has two related subtasks: multilingual semantic similarity (Section 1.1), which focuses on representation learning for individual languages, and cross-lingual semantic similarity (Section 1.2), which provides a benchmark for multilingual research that learns unified representations for multiple languages.
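To make the in-vitro evaluation setting described above concrete, the following is a minimal sketch of the standard word-similarity evaluation loop: a system scores each word pair by the cosine similarity of its embeddings, and those scores are correlated with human judgements. This is not the task's official scorer; the embedding vectors, gold pairs, and 0-4 scale below are invented placeholders.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two dense word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embedding table (placeholder vectors; a real system would load
# pre-trained embeddings such as word2vec or GloVe).
embeddings = {
    "car":   np.array([0.9, 0.1, 0.0]),
    "auto":  np.array([0.85, 0.15, 0.05]),
    "lunch": np.array([0.1, 0.8, 0.3]),
    "food":  np.array([0.2, 0.75, 0.35]),
}

# Gold-standard pairs with hypothetical human judgements on a 0-4 scale.
gold = [("car", "auto", 3.9), ("lunch", "food", 3.2), ("car", "lunch", 0.4)]

system_scores = [cosine(embeddings[w1], embeddings[w2]) for w1, w2, _ in gold]
gold_scores = [score for _, _, score in gold]

# Systems are conventionally ranked by how well their scores
# correlate with the human judgements.
rho, _ = spearmanr(system_scores, gold_scores)
print(f"Spearman correlation: {rho:.3f}")
```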
1.1 Subtask 1: Multilingual Semantic Similarity

While the English community has been using standard word similarity datasets as a common evaluation benchmark, semantic representation for other languages has generally proved difficult to evaluate. A reliable multilingual word similarity benchmark can be hugely beneficial in evaluating the robustness and reliability of semantic representation techniques across languages. Despite this, very few word similarity datasets exist for languages other than English: the original English RG-65 (Rubenstein and Goodenough, 1965) and WordSim-353 (Finkelstein et al., 2002) datasets have been translated into other languages, either by experts (Gurevych, 2005; Joubarne and Inkpen, 2011; Granada et al., 2014; Camacho-Collados et al., 2015) or by means of crowdsourcing (Leviant and Reichart, 2015), thereby creating equivalent datasets in languages other than English. However, the existing English word similarity datasets suffer from various issues:

1. The similarity scale used for the annotation of WordSim-353 and MEN (Bruni et al., 2014) does not distinguish between similarity and relatedness, and hence conflates the two. As a result, the datasets contain pairs that are judged to be highly similar even if they are not of a similar type or nature. For instance, the WordSim-353 dataset contains the pairs weather-forecast and clothes-closet with assigned similarity scores of 8.34 and 8.00 (on the [0,10] scale), respectively. Clearly, the words in the two pairs are (highly) related, but they are not similar.

2. The performance of state-of-the-art systems has already surpassed the level of human inter-annotator agreement (IAA) for many of the older datasets, e.g., RG-65 and WordSim-353. This makes these datasets unreliable benchmarks for the evaluation of newly-developed systems.

3. Conventional datasets such as RG-65, MC-30 (Miller and Charles, 1991), and WS-Sim (Agirre et al., 2009) (the similarity portion of WordSim-353) are relatively small, containing 65, 30, and 200 word pairs, respectively. Hence, these benchmarks do not allow reliable conclusions to be drawn, since performance improvements have to be large to be statistically significant (Batchkarov et al., 2016).

4. The recent SimLex-999 dataset (Hill et al., 2015) addresses both the size and consistency issues of the conventional datasets by providing word similarity scores for 999 word pairs on a consistent scale that focuses on similarity only (and not relatedness). However, the dataset suffers from other issues. First, given that SimLex-999 has been annotated by turkers, and not by human experts, the similarity scores assigned to individual word pairs have a high variance, resulting in a relatively low IAA (Camacho-Collados and Navigli, 2016). In fact, the reported IAA for this dataset is 0.67 in terms of average pairwise correlation (see the sketch after this list), which is considerably lower than that of conventional expert-based datasets, whose IAA is generally above 0.80 (Rubenstein and Goodenough, 1965; Camacho-Collados et al., 2015). Second, similarly to many of the above-mentioned datasets, SimLex-999 does not contain named entities (e.g., Microsoft) or multiword expressions (e.g., black hole). In fact, the dataset includes only words that are defined in WordNet's vocabulary (Miller et al., 1990), and therefore lacks the ability to test the reliability of systems for WordNet out-of-vocabulary words. Third, the dataset contains a large number of antonymy pairs. Indeed, several recent works have shown how significant performance improvements can be obtained on this dataset by simply tweaking the usual word embedding approaches to handle antonymy (Schwartz et al., 2015; Pham et al., 2015; Nguyen et al., 2016).
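Items 2 and 4 above both appeal to inter-annotator agreement measured as average pairwise correlation. The sketch below shows one way to compute it, assuming each annotator's scores are aligned on the same word pairs; the scores are invented, and Pearson correlation is used here, though Spearman is an equally common choice.

```python
from itertools import combinations

import numpy as np
from scipy.stats import pearsonr

def average_pairwise_iaa(annotations):
    """IAA as the mean Pearson correlation over all annotator pairs.

    `annotations` is a list of score lists, one per annotator,
    aligned on the same word pairs.
    """
    correlations = [
        pearsonr(a, b)[0] for a, b in combinations(annotations, 2)
    ]
    return float(np.mean(correlations))

# Hypothetical scores from three annotators over five word pairs.
annotators = [
    [3.8, 0.5, 2.1, 3.0, 1.2],
    [3.9, 0.7, 1.8, 3.2, 1.0],
    [3.5, 0.4, 2.4, 2.9, 1.5],
]
print(f"IAA: {average_pairwise_iaa(annotators):.2f}")
```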
Since most existing multilingual word similarity datasets are constructed on the basis of conventional English datasets, any issues associated with the latter tend simply to be transferred to the former. This is the reason why we proposed this task and constructed new challenging datasets for five different languages (i.e., English, Farsi, German, Italian, and Spanish), addressing all the above-mentioned issues. Given that multiple large and high-quality verb similarity datasets have been created in recent years (Yang and Powers, 2006; Baker et al., 2014; Gerz et al., 2016), we decided to focus on nominal words.

1.2 Subtask 2: Cross-lingual Semantic Similarity

Over the past few years multilingual embeddings that represent lexical items from multiple languages in a unified semantic space have garnered considerable research attention (Zou et al., 2013; de Melo, 2015; Vulić and Moens, 2016; Ammar et al., 2016; Upadhyay et al., 2016), while at the same time cross-lingual applications have also been increasingly studied (Xiao and Guo, 2014; Franco-Salvador et al., 2016). However, there have been very few reliable datasets for evaluating cross-lingual systems. Similarly to the case of multilingual datasets, these cross-lingual datasets have been constructed on the basis of conventional English word similarity datasets: MC-30 and WordSim-353 (Hassan and Mihalcea, 2009), and RG-65 (Camacho-Collados et al., 2015). As a result, they inherit the issues affecting their parent datasets mentioned in the previous subsection: while MC-30 and RG-65 are composed of only 30 and 65 pairs, WordSim-353 conflates similarity and relatedness.

[Table: the thirty-four domains covered by the datasets]
Animals; Art, architecture and archaeology; Biology; Business, economics, and finance; Chemistry and mineralogy; Computing; Culture and society; Education; Engineering and technology; Farming; Food and drink; Games and video games; Geography and places; Geology and geophysics; Health and medicine; Heraldry, honors, and vexillology; History; Language and linguistics; Law and crime; Literature and theatre; Mathematics; Media; Meteorology; Music; Numismatics and currencies; Philosophy and psychology; Physics and astronomy; Politics and government; Religion, mysticism and mythology; Royalty and nobility; Sport and recreation; Textile and clothing; Transport and travel; Warfare and defense
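As a closing illustration of the unified semantic space that Section 1.2 describes, the sketch below scores a cross-lingual word pair by first mapping one language's vectors into the other's space with a linear transformation, in the style of common bilingual-mapping approaches. This is not the method of any particular participating system; all vectors and the map W are invented placeholders (in practice W would be learned from a bilingual seed lexicon, e.g., by least squares or orthogonal Procrustes).

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two dense word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy monolingual embeddings (placeholder vectors).
en = {"dog": np.array([0.8, 0.2]), "car": np.array([0.1, 0.9])}
es = {"perro": np.array([0.2, 0.8]), "coche": np.array([0.9, 0.1])}

# Hypothetical linear map taking Spanish vectors into the English space.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def crosslingual_similarity(word_en, word_es):
    """Score an English-Spanish pair in the shared space."""
    return cosine(en[word_en], W @ es[word_es])

print(crosslingual_similarity("dog", "perro"))  # high: translation pair
print(crosslingual_similarity("dog", "coche"))  # lower: unrelated pair
```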