Multi-Sense Language Modelling

Andrea Lekkas¹,²,³, Peter Schneider-Kamp¹, Isabelle Augenstein³
¹Dept. of Mathematics and Computer Science, University of Southern Denmark
²Ordbogen A/S, Odense, Denmark
³Dept. of Computer Science, University of Copenhagen, Denmark
{lekkas|petersk}@imada.sdu.dk, [email protected]

Abstract

The effectiveness of a language model is influenced by its token representations, which must encode contextual information and handle the same word form having a plurality of meanings (polysemy). Currently, none of the common language modelling architectures explicitly model polysemy. We propose a language model which not only predicts the next word, but also its sense in context. We argue that this higher prediction granularity may be useful for end tasks such as assistive writing, and allow for a more precise linking of language models with knowledge bases. We find that multi-sense language modelling requires architectures that go beyond standard language models, and here propose a structured prediction framework that decomposes the task into a word followed by a sense prediction task. To aid sense prediction, we utilise a Graph Attention Network, which encodes definitions and example uses of word senses. Overall, we find that multi-sense language modelling is a highly challenging task, and suggest that future work focus on the creation of more annotated training datasets.

Sentence | Meaning
"John sat on the bank of the river and watched the currents" | bank.n.01: sloping land, especially the slope beside a body of water
"Jane went to the bank to discuss the mortgage" | bank.n.02: a financial institution that accepts deposits and channels the money into lending activities

Table 1: Example of polysemy. Senses taken from WordNet 3.0.

1 Introduction

Any variant of language model, whether standard left-to-right, masked (Devlin et al., 2019) or bidirectional (Peters et al., 2018), has to address the problem of polysemy: the same word form having multiple meanings, as seen in Tab. 1. The meaning of a particular occurrence depends on the context, and all modern language modelling architectures, from simple RNNs (Mikolov et al., 2010) to Transformers (Vaswani et al., 2017), use context-based representations. However, token representations in language models are not explicitly disambiguated. Single-prototype embeddings, i.e. traditional word vectors, have a 1-to-1 correspondence with word forms. Contextual embeddings change depending on the tokens in their context window, and are employed in recent models like ULMFit (Howard and Ruder, 2018), ELMo (Peters et al., 2018), and all Transformer architectures. However, even for contextual embeddings, polysemy is handled in an implicit, non-discrete way: the sense that a word assumes in a particular occurrence is unspecified.

Here, we propose the task of multi-sense language modelling, consisting of not only word, but also sense prediction. We conjecture that multi-sense language modelling would:

1) improve the precision of linking a language model to a knowledge base, as done in Logan et al. (2019), to help generate factually correct language. For instance: "The explorers descended in the cave and encountered a bat" refers to the entity 'bat (animal)' and not to 'bat (baseball implement)'.

2) be useful in applications such as assistive writing (Chen et al., 2012), where it is desirable to display more information about a word to a user, such as its sense, definition or usage.

Another potential use would be to explore whether such dictionary information could improve standard language modelling and reduce the amount of training data needed, which is relevant for e.g. language modelling for low-resource languages.

Consequently, our research objectives are to:

• model next-sense prediction as a task alongside standard next-word prediction in language modelling, and examine the performance of different model architectures;

• encode sense background knowledge from sense definitions and examples, and examine how it can aid sense prediction.

As a sense inventory, we use WordNet 3.0 (Miller, 1995). The sense background knowledge is encoded in a dictionary graph, as shown in Fig. 3. When reading a word w, the model can rely on an additional input signal: the state of the node that represents w in the dictionary graph (the "global node"). Node vectors in the graph are updated by a Graph Attention Network (Veličković et al., 2018).
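To make the sense inventory concrete, the snippet below shows how the WordNet 3.0 senses behind Tab. 1, together with the definitions and example uses that feed the dictionary graph, can be retrieved with NLTK. It is a minimal illustration of the resource we draw on, not part of the model code itself.

    # Minimal illustration of the WordNet 3.0 sense inventory.
    # Requires nltk and the WordNet data: nltk.download('wordnet')
    from nltk.corpus import wordnet as wn

    # All noun senses of the word form "bank", e.g. bank.n.01, bank.n.02, ...
    for synset in wn.synsets("bank", pos=wn.NOUN):
        print(synset.name())                          # sense identifier, e.g. 'bank.n.01'
        print("  definition:", synset.definition())   # gloss used as sense background knowledge
        print("  examples:  ", synset.examples())     # example uses, also encoded in the graph
        print("  lemmas:    ", [l.name() for l in synset.lemmas()])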
Findings: We find that sense prediction is a significantly more difficult task than standard word prediction. A way to tackle it is to use a structured prediction framework, where the next sense depends on the prediction of the next word. The most successful model we identified for this uses a hard cut-off for the number of words considered (SelectK, see §4.4). The additional input signal from the dictionary graph provides only marginal improvements. For future work, we argue that larger sense-labelled datasets would go a long way towards improving the overall performance of multi-sense language models.

2 Related Work

We here briefly discuss relevant works that disambiguate between word senses to address polysemy. They can be grouped into three categories: 1) multi-prototype embeddings not connected to a knowledge base; 2) supervised multi-sense embeddings based on a text corpus that only utilise a KB tangentially as the sense inventory; and 3) models that rely more substantially on features from a KB, such as glosses or semantic relations.

Multi-prototype embeddings  Huang et al. (2012) learn multi-prototype vectors by clustering word context representations. Single-prototype embeddings are determined by two feed-forward neural networks with a margin objective on predicting the next word, a quasi-language-modelling setting even if it utilises both the preceding and subsequent context. Multi-sense skip-gram (Neelakantan et al., 2014) also defines senses as cluster centroids, measuring the cosine distance of the surrounding context of ±5 words. Li and Jurafsky (2015) use Chinese Restaurant Processes to decide whether to create a new cluster, and also investigate the usefulness of multi-sense embeddings in downstream tasks such as Semantic Relatedness and PoS tagging.

Supervised multi-sense embeddings  Other models rely on sense-label supervision in a text corpus. context2vec (Melamud et al., 2016) builds contextual word embeddings by applying a biLSTM on text, and provides the option to create sense embeddings by using a sense-labelled corpus. Raganato et al. (2017) frame WSD as a sequence learning problem, with the aim of finding sense labels for an input sequence. The training corpus is the same used in our work, SemCor (Miller et al., 1993), and the core architecture is a biLSTM that reads both the preceding and subsequent context. A biLSTM is also employed in LSTMEmbed (Iacobacci and Navigli, 2019), which obtains a sense-labelled training corpus by applying the Babelfy (Moro et al., 2014) sense tagger to the English Wikipedia and other texts. Most recently, SenseBERT (Levine et al., 2020) uses WordNet supersenses (categories like food, artifact, person, etc.) to add a form of sense prediction to BERT: an additional objective requires the model to predict one of the supersenses that the word w can assume.

Sense representations can leverage WordNet glosses, as done in Chen et al. (2014) after single-prototype vectors are trained with a skip-gram model. Likewise, pre-trained word embeddings are the starting point for AutoExtend (Rothe and Schütze, 2015), an autoencoder architecture where word embeddings constitute the input and the target, whereas the embeddings for WordNet synsets are the intermediate encoded representation.
Kumar et al. (2019) use a biLSTM and self-attention to obtain contextual embeddings, then disambiguate them via a dot product with sense embeddings based on WordNet definitions.

KB-based methods  In the last couple of years, efforts have been made to enable BERT to disambiguate between senses. GlossBERT (Huang et al., 2019) takes context-gloss pairs as input: for every target word in a context sentence, N=4 WordNet glosses are found; a classification layer determines which lemma the word assumes. SenseEmBERT (Scarlini et al., 2020a) relies on the BabelNet mapping between WordNet and Wikipedia pages to collect relevant text for synsets, and computes the sense embeddings as a rank-weighted average of relevant synsets. Scarlini et al. (2020b) apply the same pipeline of context extraction, synset embedding and sense embedding construction: they compute lemma representations via BERT, and use UKB (Agirre et al., 2014) to create a set of contexts for the synset. Sense embeddings are obtained by concatenating the BERT representation of the sense contexts found in SemCor and the sense definition in WordNet.

Our model  We use pre-trained embeddings and WordNet glosses and relations to create a dictionary graph. The vectors of sense nodes can be viewed as sense embeddings, although we are not primarily interested in their quality, since our main objective is multi-sense language modelling. Future work may rely on them to improve sense disambiguation: for model variants that have to choose the correct sense among a limited number of candidates, there is an opening for the application of more complex multi-sense models than the ones explored here.

3 Multi-Sense Language Model

[Figure 1: Overview of the SelectK version of the model, which uses a Transformer-XL for the StandardLM task.]

3.1 Architecture

A language modelling task decomposes the probability of predicting an entire text of length N into the product of each word prediction, where the probability of the next word p(w_i) is influenced by the preceding context [w_1, ..., w_{i-1}]:

    p(w_1, \dots, w_N) = \prod_{i=1}^{N} p(w_i \mid w_1, \dots, w_{i-1})    (1)

[Figure 2: The input signals for graph-informed sense prediction: the standard word embedding and the vector of the global node from the dictionary graph.]

Our model aims to carry out two language modelling tasks:

1. …

… to examine the effectiveness of sense architectures independently from the accuracy of the standard …
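Under the structured-prediction view introduced in §1, Eq. (1) can be extended with a sense term. The factorisation below is one plausible way to write the joint objective; it is our reading of the setup described so far, not a formula quoted from the truncated remainder of the section:

    p(w_1, s_1, \dots, w_N, s_N) = \prod_{i=1}^{N} p(w_i \mid w_{<i}) \, p(s_i \mid w_i, w_{<i})

where s_i ranges over the WordNet senses that the predicted word w_i can assume, so that the sense prediction depends on the outcome of the word prediction.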

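As a rough illustration of the SelectK idea mentioned in the findings of §1 (a hard cut-off K on the number of words considered before predicting a sense), the sketch below restricts sense prediction to the WordNet senses of the top-K next-word candidates. The function name, the value of K and the scoring interface are illustrative assumptions, not the paper's actual implementation.

    # Illustrative sketch of SelectK-style structured prediction.
    # Assumes torch and nltk (with WordNet data) are available; names,
    # K and the sense-scoring interface are assumptions for illustration.
    import torch
    from nltk.corpus import wordnet as wn

    K = 5  # hard cut-off on the number of next-word candidates (assumed value)

    def select_k_sense_prediction(word_logits, vocab, sense_scorer, context_vec):
        """word_logits: [V] next-word scores from the standard language model.
        vocab: list mapping vocabulary indices to word forms.
        sense_scorer: callable(context_vec, synset) -> float, e.g. a dot
        product with a sense embedding (assumed interface)."""
        top_scores, top_ids = torch.topk(word_logits, K)

        # Gather the candidate senses of the K most likely next words.
        candidates = []
        for word_id in top_ids.tolist():
            word = vocab[word_id]
            for synset in wn.synsets(word):
                candidates.append((word, synset))

        if not candidates:
            # e.g. punctuation or out-of-inventory words: no sense to predict.
            return None

        # The next sense is chosen only among the senses of the K candidate
        # words, so sense prediction depends on the word prediction.
        return max(candidates, key=lambda ws: sense_scorer(context_vec, ws[1]))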