Journal of Artificial Intelligence Research 55 (2016) 953-994. Submitted 09/15; published 04/16.

Bilingual Distributed Word Representations from Document-Aligned Comparable Data

Ivan Vulić
[email protected]
University of Cambridge, Department of Theoretical and Applied Linguistics, 9 West Road, CB3 9DP, Cambridge, UK

Marie-Francine Moens
[email protected]
KU Leuven, Department of Computer Science, Celestijnenlaan 200A, 3001 Heverlee, Belgium

Abstract

We propose a new model for learning bilingual word representations from non-parallel document-aligned data. Following recent advances in word representation learning, our model learns dense real-valued word vectors, that is, bilingual word embeddings (BWEs). Unlike prior work on inducing BWEs, which heavily relied on parallel sentence-aligned corpora and/or readily available translation resources such as dictionaries, this article reveals that BWEs may be learned solely on the basis of document-aligned comparable data, without any additional lexical resources or syntactic information. We present a comparison of our approach with previous state-of-the-art models for learning bilingual word representations from comparable data that rely on the framework of multilingual probabilistic topic modeling (MuPTM), as well as with distributional local context-counting models. We demonstrate the utility of the induced BWEs in two semantic tasks: (1) bilingual lexicon extraction, and (2) suggesting word translations in context for polysemous words. Our simple yet effective BWE-based models significantly outperform the MuPTM-based and context-counting representation models from comparable data, as well as prior BWE-based models, and achieve the best reported results on both tasks for all three tested language pairs.
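To make the setting concrete: one way to exploit document alignment without sentence alignment or dictionaries is to build pseudo-bilingual documents by merging each aligned document pair and shuffling the combined word sequence, so that words from both languages come to share local contexts; a standard monolingual embedding model (e.g., skip-gram) trained on such a corpus then induces a shared bilingual vector space. The sketch below is illustrative only, with toy data; the function name `merge_and_shuffle` and the example sentences are our own, not taken from the article.

```python
import random

# Hypothetical toy document-aligned pair (illustrative data only).
doc_en = "the president spoke to the parliament today".split()
doc_it = "il presidente ha parlato al parlamento oggi".split()

def merge_and_shuffle(doc_l1, doc_l2, seed=0):
    """Build a pseudo-bilingual document from one aligned document pair.

    Merging the two word sequences and shuffling them places words of
    both languages inside each other's context windows, so that a
    monolingual context-predicting model trained on the result learns
    cross-lingual co-occurrence statistics.
    """
    merged = list(doc_l1) + list(doc_l2)
    random.Random(seed).shuffle(merged)  # fixed seed for reproducibility
    return merged

pseudo_doc = merge_and_shuffle(doc_en, doc_it)
print(pseudo_doc)
# A skip-gram model trained over many such pseudo-bilingual documents
# yields one shared embedding space for both vocabularies.
```

Note that the shuffle discards word order within the document pair, so only document-level (topical) co-occurrence signal survives; this is precisely why the approach needs no parallel sentences.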