An Exploration of the Encoding of Grammatical Gender in Word Embeddings

Hartger Veeman
Dep. of Linguistics and Philology
Uppsala University
[email protected]

Ali Basirat
Dep. of Linguistics and Philology
Uppsala University
ali.basirat@lingfil.uu.se

Abstract

The vector representation of words, known as word embeddings, has opened a new research approach in linguistic studies. These representations can capture different types of information about words. The grammatical gender of nouns is a typical classification of nouns based on their formal and semantic properties. The study of grammatical gender based on word embeddings can give insight into discussions on how grammatical genders are determined. In this study, we compare different sets of word embeddings according to the accuracy of a neural classifier determining the grammatical gender of nouns. It is found that there is an overlap in how grammatical gender is encoded in Swedish, Danish, and Dutch embeddings. Our experimental results on contextualized embeddings show that adding more contextual information to embeddings is detrimental to the classifier's performance. We also observed that removing morpho-syntactic features such as articles from the training corpora of the embeddings decreases the classification performance dramatically, indicating that a large portion of the information is encoded in the relationship between nouns and articles.

1 Introduction

The creation of embedded word representations has been a key breakthrough in natural language processing (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017; Peters et al., 2018a; Devlin et al., 2019). Word embeddings have proven to capture information relevant to many tasks within the field. Basirat and Tang (2019) have shown that word embeddings contain information about the grammatical gender of nouns. In their study, a classifier's ability to classify embeddings by grammatical gender is interpreted as an indication of the presence of information about gender in word embeddings. The way this information is encoded in word embeddings is of interest because it can contribute to the discussion of how the nouns of a language are classified into different gender categories.

This paper explores the presence of information about grammatical gender in word embeddings from three perspectives:

1. Examine how such information is encoded across languages using a model transfer between languages.

2. Determine the classifier's reliance on semantic information by using contextualized word embeddings.

3. Test the effect of gender agreement between nouns and other word categories on the information encoded into word embeddings.

2 Grammatical gender

Grammatical gender is a nominal classification system found in many languages. In languages with grammatical gender, the noun forms an agreement with other elements of the language based on its noun class. The most common grammatical gender divisions are masculine/feminine, masculine/feminine/neuter, and uter/neuter, but many other divisions exist.

Gender is assigned based on a noun's meaning and form (Basirat et al., in press). Gender assignment systems vary between languages and can be based solely on meaning (Corbett, 2013). The experiments in this paper concern grammatical gender in Swedish, Danish, and Dutch.

Nouns in Swedish and Danish can have one of two grammatical genders, uter and neuter, indicated by the article in the indefinite form and by a suffix in the definite and plural forms. Dutch nouns can technically be classified as masculine, feminine, or neuter. However, in practice, the masculine and feminine genders are only used for nouns referring to people, with the distinction between masculine and feminine visible only in the subject and object pronouns; in all other cases, non-neuter nouns can be categorized as uter. Grammatical gender in Dutch is indicated in the definite article, adjective agreement, pronouns, and demonstratives.

3 Multilingual embeddings and model transferability

The first experiment explores whether grammatical gender is encoded in a similar way in different languages. For this experiment, we leverage multilingual word embeddings: word embeddings that are mapped to the same latent space so that the vectors of words with a similar meaning have a high similarity regardless of the source language. The aligned word embeddings allow for model transfer of the neural classifier, that is, applying the classifier to a different language than the one it was trained on. If the model effectively classifies embeddings from a different language, it is likely that grammatical gender is encoded similarly in the embeddings of the two languages.

3.1 Data & Experiment

The lemma forms of all unique nouns and their genders were extracted from Universal Dependencies treebanks (Nivre et al., 2016). This resulted in about 4.5k, 5.5k, and 7.2k nouns in Danish (da), Swedish (sv), and Dutch (nl), respectively. Ten percent of each data set was sampled randomly to be used as test data. For each noun, the embedding was extracted from the pre-trained aligned word embeddings published in the MUSE project (Conneau et al., 2017). The uter/neuter class distributions are 74/26 for Swedish, 68/32 for Danish, and 75/25 for Dutch.

The classification is performed using a feed-forward neural network with an input size of 300 and a hidden layer size of 600. The loss function used is binary cross-entropy, and training is run until no further improvement is observed.

For every language, a network was trained using the data described above. Every model was then applied to its own test data and to the other languages' test data to test the model's transferability.
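As an illustration of this setup, the sketch below implements a classifier of the kind described above. It is a minimal sketch under stated assumptions: the paper only specifies the layer sizes and the loss, so the choice of PyTorch, the ReLU activation, the Adam optimizer, and the fixed number of epochs (in place of training until no further improvement) are assumptions rather than details of the original experiments.

```python
import torch
import torch.nn as nn

# Minimal sketch of the gender classifier described in Section 3.1:
# 300-dimensional aligned (MUSE) embeddings in, one hidden layer of 600 units,
# and a single sigmoid output trained with binary cross-entropy.
# The hidden activation and optimizer below are assumptions.
class GenderClassifier(nn.Module):
    def __init__(self, input_size=300, hidden_size=600):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, x):
        # Returns raw logits; BCEWithLogitsLoss applies the sigmoid.
        return self.net(x).squeeze(-1)


def train(model, embeddings, labels, epochs=50, lr=1e-3):
    """Train on an (N, 300) float tensor of noun embeddings and an (N,)
    tensor of 0/1 labels for uter/neuter. Early stopping is omitted here."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(embeddings), labels.float())
        loss.backward()
        optimizer.step()
    return model

# Hypothetical transfer usage: train on Swedish nouns, evaluate on Danish ones.
# model = train(GenderClassifier(), sv_vectors, sv_labels)
# da_accuracy = ((model(da_vectors) > 0).float() == da_labels).float().mean()
```

Because the MUSE embeddings of all three languages are aligned in the same 300-dimensional space, a model trained on one language can be applied unchanged to the nouns of another.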
3.2 Results

To isolate how much of a model's accuracy is caused by successfully applying learned patterns from the source language to the target language, rather than by successful random guessing, the following baseline is defined:

\[
\sum_{g \in \{u,n\}} p(g_s)\, p(g_t)
\]

where $p(g_s)$ and $p(g_t)$ are the proportions of gender $g$ (uter or neuter) in the source training data and the target test data. This baseline is the chance that the model would make a correct guess purely based on the class distributions of the training data and the test data. Subtracting this chance from the results should give an indication of how much information in the model was transferable to the task of classifying test data in another language. The absolute results are displayed in Table 1, while the results corrected with this baseline are displayed in Table 2.

Table 1: Accuracy for model (vertical) applied to test set (horizontal)

        SV      DA      NL
SV    93.55   73.89   73.37
DA    81.18   91.81   78.50
NL    71.32   78.54   93.34

Table 2: Corrected accuracy for model (vertical) applied to test set (horizontal)

        SV      DA      NL
SV    32.03   15.25   11.37
DA    22.54   35.33   19.50
NL     9.32   19.54   31.15
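To make the correction concrete, consider the Swedish model applied to the Danish test set, using the overall class distributions from Section 3.1 (74/26 uter/neuter for Swedish, 68/32 for Danish) as an approximation of the training and test distributions:

\[
\sum_{g \in \{u,n\}} p(g_s)\, p(g_t) = 0.74 \cdot 0.68 + 0.26 \cdot 0.32 \approx 0.586
\]

Subtracting this baseline of 58.64% from the absolute accuracy of 73.89% in Table 1 gives the corrected score of 15.25% reported in Table 2.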
All transfers yielded a result above the random-guessing baseline. This means the classifier is learning patterns in the source-language data that are applicable to the target-language data. From this, it can be concluded that grammatical gender is encoded similarly between these languages to a certain degree.

The performance of the Swedish model in classifying Dutch grammatical gender, and vice versa, is the lowest of all language pairs. This language pair is also the pair with the largest phylogenetic language distance. The language pair with the smallest distance is Danish/Swedish, and the Danish model applied to the Swedish test data produces the best result of all model transfers, achieving an accuracy of 22.54% over the baseline. This observation of an inverse relationship between language distance and gender transferability agrees with the broader experiments performed by Veeman et al. (2020).

Interestingly, the success of the model transfer is not symmetrical. This is most clear in the Swedish-Danish pair, which achieves an accuracy of 22.54% over the baseline from Danish to Swedish, but only 15.25% over the baseline from Swedish to Danish. The patterns the Danish model learns are more applicable to the Swedish data than vice versa.

4 Contextual word embeddings

Adding contextual information to a word embedding model has proven to be effective for semantic tasks like named entity recognition or semantic role labeling (Peters et al., 2018b). This addition of semantic information can be leveraged to measure its influence on the gender classification task. A contextualized word embedding model was therefore used to quantify the role of semantic information in the noun classification.

4.1 Embeddings from Language Models (ELMo)

The ELMo model (Peters et al., 2018a) is a multi-layer biRNN that leverages pre-trained deep bidirectional language models to create contextualized word representations.

4.2 Data & Experiment

The comparison was performed using a Swedish pre-trained ELMo model (Che et al., 2018). The nouns and their gender labels were extracted from the UD treebanks, and the noun embeddings were generated using their treebank sentence as context. The output is collected at the word representation layer, the first contextualized layer, and the second contextualized (output) layer (a code sketch of this extraction is given at the end of this section). This resulted in a set of a little over 10k embeddings for every layer, from which 10% was randomly sampled and split off as test data. The embeddings have a size of 1024, and the hidden layer of the classifier has a size double the input size, 2048. The results of this comparison are shown in Table 3.

Table 3: Accuracy and loss for different layers of the ELMo model

                              Loss    Accuracy
Word representation layer    0.168      93.82
First layer                  0.260      92.13
Second layer                 0.274      91.24

4.3 Results

An apparent decrease in accuracy and an increase in loss are observed when classifying the gender of the contextualized word embeddings.
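To make the extraction in Section 4.2 concrete, the sketch below shows one way the per-layer noun embeddings could be collected. It assumes the elmoformanylangs package that distributes the Che et al. (2018) models, with its Embedder.sents2elmo interface; the model directory, the example sentence, and the batching details are placeholders, and the paper itself does not specify the exact tooling used.

```python
import numpy as np
from elmoformanylangs import Embedder  # assumed interface for the Che et al. (2018) models

# Sketch of the per-layer extraction in Section 4.2 (assumptions noted above):
# every noun is embedded within its treebank sentence, and the vector of the noun
# token is collected separately from the word representation layer (0), the first
# contextualized layer (1), and the second contextualized (output) layer (2).
embedder = Embedder("path/to/swedish-elmo")  # placeholder model directory


def noun_embeddings(sentences, noun_indices, layer):
    """sentences: list of token lists; noun_indices: position of the target noun
    in each sentence; layer: 0, 1, or 2. Returns an (N, 1024) matrix."""
    # output_layer selects which ELMo layer is returned for every token (assumed signature).
    per_sentence = embedder.sents2elmo(sentences, output_layer=layer)
    return np.stack([vecs[i] for vecs, i in zip(per_sentence, noun_indices)])


# Toy usage: one sentence with the noun "hund" at position 3. Each of the three
# layer-wise datasets would then feed the same kind of classifier as in Section 3,
# now with input size 1024 and a hidden layer of 2048 units.
sentences = [["Jag", "ser", "en", "hund", "."]]
noun_indices = [3]
datasets = {layer: noun_embeddings(sentences, noun_indices, layer) for layer in (0, 1, 2)}
```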
