Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)

Definition Modeling: Learning to Define Word Embeddings in Natural Language

Thanapon Noraset, Chen Liang, Larry Birnbaum, Doug Downey
Department of Electrical Engineering & Computer Science, Northwestern University, Evanston, IL 60208, USA
{nor, chenliang2013}@u.northwestern.edu, {l-birnbaum, d-downey}@northwestern.edu

Abstract

Distributed representations of words have been shown to capture lexical semantics, as demonstrated by their effectiveness in word similarity and analogical relation tasks. But these tasks only evaluate lexical semantics indirectly. In this paper, we study whether it is possible to utilize distributed representations to generate dictionary definitions of words, as a more direct and transparent representation of the embeddings' semantics. We introduce definition modeling, the task of generating a definition for a given word and its embedding. We present several definition model architectures based on recurrent neural networks, and experiment with the models over multiple data sets. Our results show that a model that controls dependencies between the word being defined and the definition words performs significantly better, and that a character-level convolution layer designed to leverage morphology can complement word-level embeddings. Finally, an error analysis suggests that the errors made by a definition model may provide insight into the shortcomings of word embeddings.

Word          | Generated definition
brawler       | a person who fights
butterfish    | a marine fish of the atlantic coast
continually   | in a constant manner
creek         | a narrow stream of water
feminine      | having the character of a woman
juvenility    | the quality of being childish
mathematical  | of or pertaining to the science of mathematics
negotiate     | to make a contract or agreement
prance        | to walk in a lofty manner
resent        | to have a feeling of anger or dislike
similar       | having the same qualities
valueless     | not useful

Table 1: Selected examples of generated definitions. The model has been trained on occurrences of each example word in running text, but not on the definitions.

1 Introduction

Distributed representations of words, or word embeddings, are a key component in many natural language processing (NLP) models (Turian, Ratinov, and Bengio 2010; Huang et al. 2014). Recently, several neural network techniques have been introduced to learn high-quality word embeddings from unlabeled textual data (Mikolov et al. 2013a; Pennington, Socher, and Manning 2014; Yogatama et al. 2015). Embeddings have been shown to capture lexical syntax and semantics. For example, it is well known that nearby embeddings are more likely to represent synonymous words (Landauer and Dumais 1997) or words in the same class (Downey, Schoenmackers, and Etzioni 2007). More recently, the vector offsets between embeddings have been shown to reflect analogical relations (Mikolov, Yih, and Zweig 2013). However, tasks such as word similarity and analogy only evaluate an embedding's lexical information indirectly.

In this work, we study whether word embeddings can be used to generate natural language definitions of their corresponding words. Dictionary definitions serve as direct and explicit statements of word meaning. Thus, compared to the word similarity and analogical relation tasks, definition generation can be considered a more transparent view of the syntax and semantics captured by an embedding. We introduce definition modeling: the task of estimating the probability of a textual definition, given a word being defined and its embedding. Specifically, for a given set of word embeddings, a definition model is trained on a corpus of word and definition pairs. The models are then tested on how well they model definitions for words not seen during training, based on each word's embedding.
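Stated slightly more formally (the notation below is ours, but it follows the conditional language-modeling framing just described), a definition model assigns a probability to a candidate definition D = d_1, ..., d_T of a word w* by factorizing it left to right and conditioning every definition word on the word being defined:

```latex
p(D \mid w^{*}) \;=\; \prod_{t=1}^{T} p\left(d_t \mid d_1, \ldots, d_{t-1},\, w^{*}\right)
```

Training maximizes this likelihood over word and definition pairs; perplexity on the definitions of held-out words then indicates how well the embedding of w* supports generating its definition.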
The definition models studied in this paper are based on recurrent neural network (RNN) models (Elman 1990; Hochreiter and Schmidhuber 1997). RNN models have established new state-of-the-art performance on many sequence prediction and natural language generation tasks (Cho et al. 2014; Sutskever, Vinyals, and Le 2014; Karpathy and Fei-Fei 2014; Wen et al. 2015a). An important characteristic of dictionary definitions is that only a subset of the words in the definition depend strongly on the word being defined. For example, the word "woman" in the definition of "feminine" in Table 1 depends on the word being defined more strongly than the rest of the definition does. To capture this varying degree of dependency, we introduce a gated update function that is trained to control how much information about the word being defined is used when generating each definition word. Furthermore, since the morphemes of the word being defined play a vital role in the definition, we experiment with a character-level convolutional neural network (CNN) to test whether it can provide information complementary to the word embeddings. Our best model can generate fluent and accurate definitions, as shown in Table 1. We note that none of the definitions in the table exactly match any definition seen during training.
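To give a concrete picture of what such a gated update can look like, here is a minimal numpy sketch assuming a GRU-style gate over the embedding of the word being defined. The function and weight names (gated_update, v_star, Wz, Uz, ...) are ours, and the paper's actual parameterization may differ; this is an illustration of the idea, not the published model.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def gated_update(h_t, v_star, p):
    """Mix the decoder hidden state h_t with v_star, the embedding of the
    word being defined, through a GRU-style gate.

    Illustrative sketch only: the weight names and exact parameterization
    are assumptions, not necessarily the paper's equations.
    """
    z_t = sigmoid(p["Wz"] @ v_star + p["Uz"] @ h_t)   # update gate: how much of v_star to inject
    r_t = sigmoid(p["Wr"] @ v_star + p["Ur"] @ h_t)   # reset gate applied to v_star
    h_tilde = np.tanh(p["Wh"] @ (r_t * v_star) + p["Uh"] @ h_t)
    return (1.0 - z_t) * h_t + z_t * h_tilde          # gated interpolation


# Toy usage: equal embedding and hidden sizes, random parameters.
d = 8
rng = np.random.default_rng(0)
params = {k: rng.normal(scale=0.1, size=(d, d))
          for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}
h_next = gated_update(rng.normal(size=d), rng.normal(size=d), params)
print(h_next.shape)  # (8,)
```

Intuitively, a gate of this kind can stay near zero for generic definition words such as "the" or "of", letting the language model alone drive the step, and open up for content words such as "woman" in the definition of "feminine".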
Our contributions are as follows: (1) We introduce the definition modeling task, and present a probabilistic model for the task based on RNN language models. (2) In experiments with different model architectures and word features, we show that the gate function improves the perplexity of an RNN language model on the definition modeling task by ∼10%, and that the character-level CNN further improves the perplexity by ∼5%. (3) We also show that the definition models can be used to perform the reverse dictionary task studied in previous work, in which the goal is to match a given definition to its corresponding word. Our model achieves an 11.8% absolute gain in accuracy compared to the previous state of the art by Hill et al. (2016). (4) Finally, our error analysis shows that a well-trained set of word embeddings plays a significant role in the quality of the generated definitions, and that some of the error types suggest shortcomings of the information encoded in the word embeddings.

2 Previous Work

Our goal is to investigate RNN models that learn to define word embeddings by training on examples of dictionary definitions. While dictionary corpora have been utilized extensively in NLP, to the best of our knowledge none of the previous work has attempted to create a generative model of definitions. Early work focused on extracting semantic information from definitions. For example, Chodorow (1985) and Klavans and Whitman (2001) constructed a taxonomy of words from dictionaries. Dolan et al. (1993) and Vanderwende et al. (2005) extracted semantic representations from definitions to populate a lexical knowledge base.

In distributed semantics, words are represented by a dense vector of real numbers rather than by semantic predicates. Recently, dictionary definitions have been used to learn such embeddings. For example, Wang et al. (2015) used words in definition text as a form of "context" words for the Word2Vec algorithm (Mikolov et al. 2013b). Hill et al. (2016) use dictionary definitions to model compositionality, and evaluate their models with the reverse dictionary task. While these works learn word or phrase embeddings from definitions, we focus only on generating definitions from existing (fixed) embeddings. Our experiments show that our models outperform those of Hill et al. (2016) on the reverse dictionary task.

Our work also employs embedding models for natural language generation. Image caption generation (Karpathy and Fei-Fei 2014) and spoken dialog generation (Wen et al. 2015a) are related to our work in that a sequence of words is generated from a single input vector. Our model architectures are inspired by sequence-to-sequence models (Cho et al. 2014; Sutskever, Vinyals, and Le 2014), but definition modeling is distinct, as it is a word-to-sequence task.

3 Dictionary Definitions

In this section, we first investigate definition content and structure through a study of existing dictionaries. We then describe our new data set, and define our tasks and metrics.

3.1 Definition Content and Structure

In existing dictionaries, individual definitions are often comprised of a genus and differentiae (Chodorow, Byrd, and Heidorn 1985; Montemagni and Vanderwende 1992). The genus is a generalized class of the word being defined, and the differentiae is what makes the word distinct from others in the same class. For instance, in

Phosphorescent: emitting light without appreciable heat, as by slow oxidation of phosphorous

"emitting light" is the genus, and "without appreciable heat ..." is the differentiae. Furthermore, definitions tend to include common patterns such as "the act of ..." or "one who has ..." (Markowitz, Ahlswede, and Evens 1986). However, the patterns and styles are often unique to each dictionary.

The genus + differentiae (G+D) structure is not the only pattern for definitions. For example, the entry below exhibits several distinct structures:

Eradication: the act of plucking up by the roots; a rooting out; extirpation; utter destruction

This set of definitions includes a synonym ("extirpation"), a reversal of the G+D structure ("utter destruction"), and an uncategorized structure ("a rooting out").

3.2 Corpus: Preprocessing and Analysis

Dictionary corpora are available in digital formats, but they are designed for human consumption and require preprocessing before they can be utilized for machine learning. Dictionaries contain non-definitional text intended to aid human readers; for example, the entry for "gradient" in Wordnik contains field labels ("Mathematics") and example usage ("as, the gradient line of a railroad."). Further, many entries contain multiple definitions, usually (but not always) separated by ";".

We desire a corpus in which each entry contains only a word being defined and a single definition. We parse dictionary entries from GCIDE and preprocess WordNet's glosses, removing the field labels and usage examples. The parsers and preprocessing scripts can be found online.
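The preprocessing just described lends itself to a short sketch. The helper below is hypothetical (the name split_entry and its heuristics are ours, not the authors' released scripts); it only illustrates the two operations named in this section: stripping non-definitional text and splitting multi-sense entries on ";".

```python
import re


def split_entry(word, raw_definition):
    """Turn one raw dictionary entry into (word, single definition) pairs.

    Hypothetical helper: strips parenthesized field labels such as
    "(Mathematics)", drops trailing usage examples introduced by "as, ...",
    and splits multiple senses on ";".
    """
    text = re.sub(r"\([^)]*\)", " ", raw_definition)                 # remove field labels
    senses = [s.strip() for s in text.split(";")]                    # one sense per ";"-separated chunk
    senses = [re.sub(r"\bas,.*$", "", s).strip() for s in senses]    # drop "as, ..." usage examples
    return [(word, s) for s in senses if s]


# Example: the multi-sense entry for "eradication" from Section 3.1.
print(split_entry(
    "eradication",
    "the act of plucking up by the roots; a rooting out; extirpation; utter destruction",
))
# -> [('eradication', 'the act of plucking up by the roots'),
#     ('eradication', 'a rooting out'),
#     ('eradication', 'extirpation'),
#     ('eradication', 'utter destruction')]
```

Each resulting pair then contributes one training example of the form (word being defined, single definition).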
