Spying on Your Neighbors: Fine-Grained Probing of Contextual Embeddings for Information About Surrounding Words

Josef Klafka
Department of Psychology
Carnegie Mellon University
[email protected]

Allyson Ettinger
Department of Linguistics
University of Chicago
[email protected]

Abstract

Although models using contextual word embeddings have achieved state-of-the-art results on a host of NLP tasks, little is known about exactly what information these embeddings encode about the context words that they are understood to reflect. To address this question, we introduce a suite of probing tasks that enable fine-grained testing of contextual embeddings for encoding of information about surrounding words. We apply these tasks to examine the popular BERT, ELMo and GPT contextual encoders, and find that each of our tested information types is indeed encoded as contextual information across tokens, often with near-perfect recoverability—but the encoders vary in which features they distribute to which tokens, how nuanced their distributions are, and how robust the encoding of each feature is to distance. We discuss implications of these results for how different types of models break down and prioritize word-level context information when constructing token embeddings.

1 Introduction

The field of natural language processing has recently seen impressive performance gains associated with the use of “contextual word embeddings”: high-dimensional vectors that have access to information from the contexts of the words they represent. Models that use these contextual embeddings achieve state-of-the-art performance on a variety of natural language processing tasks, from question answering to natural language inference. As of this writing, nearly all of the models on the SuperGLUE leaderboard (Wang et al., 2019) use contextual embeddings in their architectures, most notably models building on the BERT (Devlin et al., 2019) and Transformer-XL (Dai et al., 2019) models.

Despite the clear power afforded by incorporating context into word embeddings, little is known about what information these contextual embeddings actually encode about the words around them. In a sentence like “The lawyer questioned the judge”, does the contextual representation for questioned reflect properties of the subject lawyer? Of the object judge? What determines the information that a contextual embedding absorbs about its surrounding words? In this paper, we address these questions by designing and implementing a suite of probing tasks to test contextual embeddings for information about syntactic and semantic features of words in their contexts. We use controlled sentences of fixed structure, allowing us to probe for information associated with word categories, and to avoid confounds with particular vocabulary items. We then apply these tests to examine the distribution of contextual information across token representations produced by the contextual encoders BERT (Devlin et al., 2019), ELMo (Peters et al., 2018b), and GPT (Radford et al., 2018).

The contributions of this paper are twofold. First, we introduce a suite of novel probing tasks for testing how encoders distribute contextual information across sentence tokens. All datasets and code are available for follow-up testing.[1] Second, we use these tests to shed light on the distribution of context information in the state-of-the-art encoders BERT, ELMo and GPT. We find that these models encode each of our tested word features richly across sentence tokens, often with perfect or near-perfect recoverability, but the details of how the models distribute this information vary across encoders. In particular, bidirectional models show more nuance in information selectivity, while the deeper transformer models show more robustness to distance. Follow-up tests suggest that the effects cannot be chalked up to proximity, and that general word features are encoded more robustly than word identity.

[1] Probing datasets and code available at https://github.com/jklafka/context-probes.
2 Our approach

Our tests address the following basic question: if we probe the contextual representation of a given token in a sentence, how much information can we recover about the other words in that sentence? For example, if we create a contextual embedding for the word questioned in the sentence “The lawyer questioned the judge”, how well can we extract information about the subject noun (lawyer)? What if we probe the object noun (judge) or the determiners (the)? We develop tasks to probe representations of each word for various types of information about the other words of the sentence, allowing us to examine with fine granularity how contextual encoders distribute information about surrounding words. We complete this investigation for each word in a set of fixed-length sentences of pre-determined form, which allows us to characterize behaviors based on word categories (e.g., subjects versus verbs). Using this approach, we can examine how the distribution of context information is impacted by a) the type of information being encoded, and b) the properties of the word that the embedding corresponds to.
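To make this setup concrete, the sketch below shows one way to extract the contextual embedding of a probed word, here using BERT through the HuggingFace transformers library. This is a minimal illustration rather than the paper's own extraction pipeline: the model, layer, and library choices are assumptions, and the handling of ELMo and GPT would differ.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "The lawyer questioned the judge"
encoded = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**encoded).last_hidden_state  # (1, num_subtokens, 768)

# word_ids() maps each subtoken back to its source word ([CLS] and [SEP]
# map to None); word index 2 is the probed word "questioned".
subtoken_index = encoded.word_ids(0).index(2)
probed_embedding = hidden[0, subtoken_index]  # the verb's contextual vector
```

The resulting vector is what the probing classifier (Section 4) takes as input; probing a different word, or a different encoder layer, only changes which hidden state is selected.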
3 Related work

Much work has been done on analyzing the information captured by sentence encoders and language models in general. Classification-based probing tasks have been used to analyze the contents of sentence embeddings (Adi et al., 2016; Conneau et al., 2018; Ettinger et al., 2016), finding that these embeddings encode a variety of information about sentence structure, content, length, etc., though more tightly-controlled tasks suggest weaknesses in capturing basic sentence meaning (Ettinger et al., 2018). Our work uses the same classification-based probing methodology, but focuses on probing token-level embeddings for context information.

Other work has analyzed linguistic capacities of language models by examining output probabilities in context, emulating methods for studying human language processing. Much of this work has studied sensitivity to syntactic dependencies in recurrent neural network language models (e.g. Linzen et al., 2016; Wilcox et al., 2018; Chowdhury and Zamparelli, 2018; Gulordava et al., 2018; Marvin and Linzen, 2018; Futrell et al., 2019). Using similar methods to test syntactic awareness in BERT, Goldberg (2019) finds the model to perform almost at ceiling on syntactic tests. Testing BERT's outputs on a range of semantic, syntactic and pragmatic information, Ettinger (2020) finds strong sensitivity to syntax, but clear limitations in areas of semantics and pragmatic/commonsense reasoning. We complement this work with a direct focus on the contextual token representations learned by models pre-trained on language modeling, examining the syntactic and semantic information that these embeddings capture about surrounding words.

Most directly related to the present work are studies using probing and other methods to analyze information in contextual token embeddings. Some of this research (e.g. Tenney et al., 2019a; Jawahar et al., 2019) finds that BERT encodes more local, syntactic information at lower layers and more global, semantic information at higher layers. Peters et al. (2018a) find that encoders differ in encoding strength for semantic features but all encode these features strongly where possible. Hewitt and Manning (2019) provide evidence that contextual encoders capture sentence-level hierarchical syntactic structures in their representations. Other work (Liu et al., 2019; Tenney et al., 2019b) finds that contextual word encoders struggle to learn fine-grained linguistic information in a variety of contexts. These papers have focused primarily on studying the ability of contextual embeddings to capture information about the full sentence, or about phrases or dependencies of which those contextual embeddings are a part. We focus on mapping the precise distribution of context information across token embeddings, with a systematic, fine-grained investigation of the information that each token encodes about each of its surrounding tokens.

4 Probing for contextual information

For each of our probing tasks, we test for a particular information type, formulated as a query about a particular target word in the sentence—for instance, “What is the animacy of the subject?” or “What is the tense of the verb?”. We then apply these queries to probe the embeddings for each word of the sentence in turn—we call this the probed word. For example, a test with a probed word of verb, a target word of subject noun, and an information type of animacy would ask the question: “What does the embedding of the verb tell us about the animacy of the subject noun?” We implement each test as a binary classification task (e.g., “animate” vs “inanimate”), and train and test a multi-layer perceptron classifier using the embeddings of one probed word category at a time as input for the task. In this section, we describe the details of our probing datasets and tested information types.

Task                 Example                             Label
(subject) Number     The lawyer betrayed the judge.      SINGULAR
                     The lawyers betrayed the judge.     PLURAL
(subject) Gender     The waiter betrayed the judge.      MASCULINE
                     The waitress betrayed the judge.    FEMININE
(subject) Animacy    The car betrayed the judge.         INANIMATE
                     The turtle betrayed the judge.      ANIMATE

Table 1: Example items from probing tasks for each noun information type.

4.1 Dataset construction

We construct our datasets using generated transitive sentences with a fixed five-word structure: “DET SUBJ-N VB DET OBJ-N”, as in “The lawyer questioned the judge”. Because the determiner “the” and the past-tense verb forms in these sentences do not inflect for number, this structure helps ensure that no word but the target noun indicates the number information of interest. For each task we generate 4000 training and 1000 test transitive sentences. We generate separate datasets for each target word within an information type—for example, generating separate subject animacy and object animacy datasets.

4.2 Information types

We probe for three types of linguistic information about the target nouns: number, gender, and animacy, as exemplified in Table 1.
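As a concrete illustration of the dataset construction in Section 4.1, the sketch below generates labeled items for the subject animacy task from the fixed “DET SUBJ-N VB DET OBJ-N” template. The word lists are invented stand-ins (the paper's actual vocabulary is not shown in this excerpt); only the template and the 4000/1000 train/test split follow the text.

```python
import random

random.seed(0)

# Invented stand-in vocabulary for illustration only.
ANIMATE = ["lawyer", "judge", "turtle", "doctor"]
INANIMATE = ["car", "rock", "table", "book"]
VERBS = ["betrayed", "questioned", "observed", "described"]

def make_subject_animacy_item():
    """One 'DET SUBJ-N VB DET OBJ-N' sentence labeled by subject animacy."""
    label = random.choice(["ANIMATE", "INANIMATE"])
    subj = random.choice(ANIMATE if label == "ANIMATE" else INANIMATE)
    obj = random.choice(ANIMATE + INANIMATE)
    return f"The {subj} {random.choice(VERBS)} the {obj}.", label

# 4000 training and 1000 test sentences per task, as described above.
train = [make_subject_animacy_item() for _ in range(4000)]
test = [make_subject_animacy_item() for _ in range(1000)]
```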

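Finally, a minimal sketch of the probing classifier itself: a multi-layer perceptron trained to recover a target-word feature from the embeddings of one probed word category at a time, as described in Section 4. The embed_probed_word helper is hypothetical (it would return a probed token's contextual vector, e.g. as in the Section 2 sketch), and the hyperparameters are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def run_probe(train, test, embed_probed_word):
    """Train a binary MLP probe and return its test accuracy.

    embed_probed_word(sentence) -> 1-D numpy array: the contextual
    embedding of the probed word (hypothetical helper; see the BERT
    snippet in Section 2 for one way to compute it).
    """
    X_train = np.stack([embed_probed_word(s) for s, _ in train])
    y_train = [label for _, label in train]
    X_test = np.stack([embed_probed_word(s) for s, _ in test])
    y_test = [label for _, label in test]

    # Hidden size and iteration budget are illustrative, not the
    # paper's reported configuration.
    probe = MLPClassifier(hidden_layer_sizes=(100,), max_iter=300)
    probe.fit(X_train, y_train)

    # High accuracy means the target feature (e.g., subject animacy) is
    # recoverable from this probed word's embedding.
    return probe.score(X_test, y_test)
```

Accuracy from such a probe, computed for each probed word category and each information type, is what supports the per-token maps of context information described above.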