Robust Named Entity Recognition with Truecasing Pretraining

The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)

Stephen Mayhew, Nitish Gupta, Dan Roth
University of Pennsylvania
Philadelphia, PA, 19104
{mayhew, nitishg, danroth}@seas.upenn.edu

Abstract

Although modern named entity recognition (NER) systems show impressive performance on standard datasets, they perform poorly when presented with noisy data. In particular, capitalization is a strong signal for entities in many languages, and even state of the art models overfit to this feature, with drastically lower performance on uncapitalized text. In this work, we address the problem of robustness of NER systems in data with noisy or uncertain casing, using a pretraining objective that predicts casing in text, or a truecaser, leveraging unlabeled data. The pretrained truecaser is combined with a standard BiLSTM-CRF model for NER by appending output distributions to character embeddings. In experiments over several datasets of varying domain and casing quality, we show that our new model improves performance in uncased text, even adding value to uncased BERT embeddings. Our method achieves a new state of the art on the WNUT17 shared task dataset.

Figure 1: This figure shows the same sentence in two casing scenarios: on the top, uncased ("eliud kipchoge has won in london again."), on the bottom, truecased ("Eliud Kipchoge has won in London again."). Notice that the words marked as uppercase are also named entities. This paper explores how truecasing can be used in named entity recognition.

Introduction

Modern named entity recognition (NER) models perform remarkably well on standard English datasets, with F1 scores over 90% (Lample et al. 2016). But performance on these standard datasets drops by over 40 points F1 when casing information is missing, showing that these models rely strongly on the convention of marking proper nouns with capitals, rather than on contextual clues. Since text in the wild is not guaranteed to have conventional casing, it is important to build models that are robust to test data case.

One possible solution to keep NER systems from overfitting to capitalization is to omit casing information. This would potentially lead to better performance on uncased text, but fails to take advantage of an important signal.

We build on prior work in truecasing (Susanto, Chieu, and Lu 2016) by proposing to use a truecaser to predict missing case labels for words or characters (as shown in Figure 1). Intuitively, since named entities are marked with capitals, a trained truecaser should improve an NER system. We design an architecture in which the truecaser output is fed into a standard BiLSTM-CRF model for NER. We experiment with pretraining the truecaser, and fine-tuning on the NER train set. When designing the truecaser, we suggest a preprocessing regimen that biases the data towards named entity mentions.

In our experiments, we show that truecasing remains a difficult task, and further that although a perfect truecaser gives high NER scores, the quality of the truecaser does not necessarily correlate with NER performance. Even so, incorporating predicted casing information from the right truecaser improves performance in both cased and uncased data, even when used in conjunction with uncased BERT (Devlin et al. 2019). We evaluate on three NER datasets (CoNLL, Ontonotes, WNUT17) in both cased and uncased scenarios, and achieve a new state of the art on WNUT17.

Truecaser Model

A truecaser model takes a sentence as input and predicts case values (upper vs. lower) for all characters in the sentence. We model truecasing as a character-level binary classification task, similar to Susanto, Chieu, and Lu (2016). The labels, L for lower and U for upper, are predicted by a feedforward network using the hidden representations of a bidirectional LSTM (BiLSTM) that operates over the characters in a sentence. For example, in Figure 2, the truecaser predicts "name was Alan" for the input "name was alan". Formally, the output of this model is a categorical distribution d_c over two values (true/false), for each character c,

d_c = softmax(W h_c)

where h_c ∈ R^H is the hidden representation for the c-th character from the BiLSTM, and W ∈ R^{2×H} represents a learnable feed-forward network. In addition to the parameters of the LSTM and the feedforward network, the model also learns character embeddings. Generating training data for this task is trivial and does not need any manual labeling; we lower-case text to generate input instances for the truecaser, and use the original case values as the gold labels to be predicted by the model. The model also uses whitespace characters as input, for which the gold label is always lowercase (L) (not shown in Figure 2).

Figure 2: Diagram of our truecasing model. The input is characters (e.g., "n a m e w a s a l a n"), and the output hidden states from the BiLSTM are used to predict binary labels, 'U' for upper case, and 'L' for lower case (here, "L L L L L L L U L L L"). A sentence fragment is shown, but the BiLSTM takes entire sentences as input.
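As a concrete illustration of the model just described, the sketch below implements a character-level BiLSTM truecaser in PyTorch, together with the label-generation step of lowercasing raw text and reading the gold case labels off the original. This is a minimal sketch, not the authors' released implementation; the names (CharTruecaser, make_example, char_to_id) and the hyperparameter values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CharTruecaser(nn.Module):
    """Character-level truecaser: a BiLSTM over characters followed by a
    per-character binary classifier, d_c = softmax(W h_c)."""
    def __init__(self, char_vocab_size, char_dim=50, hidden_dim=100):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        self.bilstm = nn.LSTM(char_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # h_c has size 2 * hidden_dim because the LSTM is bidirectional.
        self.classifier = nn.Linear(2 * hidden_dim, 2)

    def forward(self, char_ids):                          # (batch, num_chars)
        h, _ = self.bilstm(self.char_emb(char_ids))       # (batch, num_chars, 2 * hidden_dim)
        return torch.softmax(self.classifier(h), dim=-1)  # d_c: (batch, num_chars, 2)

def make_example(sentence, char_to_id):
    """No manual labeling is needed: the lowercased sentence is the input, and the
    original casing supplies the gold labels (1 = upper, 0 = lower). Whitespace
    characters are kept as input and always labeled 0 (lower)."""
    input_ids = [char_to_id.get(ch.lower(), char_to_id["<unk>"]) for ch in sentence]
    labels = [1 if ch.isupper() else 0 for ch in sentence]
    return torch.tensor([input_ids]), torch.tensor([labels])
```

Training then minimizes per-character cross-entropy between these labels and the model's predictions (in practice the pre-softmax scores would be passed to nn.CrossEntropyLoss).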
Proposed NER Model

In NER literature, the standard model is the BiLSTM-CRF. In this model, token embeddings are given as input to a BiLSTM, and the output hidden representations are in turn passed as features to a Conditional Random Field (CRF). The BiLSTM input embeddings are typically a concatenation of pretrained word embeddings (such as GloVe; Pennington, Socher, and Manning 2014) and encoded character embeddings. Given the popularity of this model, we omit a detailed description, and refer interested readers to prior work (Chiu and Nichols 2016; Lample et al. 2016).

Our proposed model augments the standard BiLSTM at the character level, appending truecaser predictions (d_c, from above) to each character embedding. Formally, let the input embedding for token t be x_t:

x_t = [w_{x_t} ; f(c_1, c_2, ..., c_{|x_t|})]    (1)

Here, word embeddings w_{x_t} are concatenated with an embedding generated by f(.), which is some encoder function over character embeddings c_{1:|x_t|} representing characters in x_t. This is usually either a BiLSTM, or, in our case, a convolutional neural network (CNN). Even if the word vectors are uncased, casing information may be encoded on the character level through this mechanism.

For any x_t, our model extends this framework to include predictions from a truecaser for all characters, as a concatenation of character embedding c_i with the corresponding truecaser prediction d_i:

v_i = [c_i ; d_i]    (2)

Now, the input vector to the BiLSTM is:

x_t = [w_{x_t} ; f(v_1, v_2, ..., v_{|x_t|})]    (3)

In this way, each character is now represented by a vector which encodes the value of the character (such as 'a', or 'p'), and also a distribution over its predicted casing. This separation of character and casing value has also been used in Chiu and Nichols (2016) and Collobert et al. (2011). The distinction of our model is that we explicitly use the predicted casing value (and not the true casing in the data). The reason for using predicted casing is to make the NER model robust to noisy casing during test time. Also, an NER system using gold casing while training might overfit to quirks of the training data, whereas using predicted casing will help the model to learn from its own mistakes and to better predict the correct NER tag.

A diagram of our model is shown in Figure 3. We refer to the two-dimensional addition to each character vector as "case vectors." Notice that there are two separate character vectors present in the model: those that are learned by the truecaser and those that are learned in the NER model (char vectors). We chose to keep these separate because it's not clear that the truecaser character vectors encode what the NER model needs, and they need to stay fixed in order to work properly in the truecaser model.

Figure 3: Diagram of our NER model. The left side shows a standard BiLSTM-CRF model with word vectors (red) concatenated with character vectors (purple). The right side shows how the character vectors are created. The gray shaded area is our truecaser, with parameters frozen. This produces 2-dimensional case predictions (in blue) for each character. These are concatenated with learned character embeddings (in green) before going to a CNN for encoding.

In our experiments, we found that it is best to detach the truecaser predictions from the overall computation graph. This means that the gradients propagated back from the NER tag loss do not flow into the truecaser. Instead, we allow parameters in the truecaser to be modified through two avenues: pretrained initialization, and truecaser fine-tuning.
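To make Equations (1)-(3) and the detaching step concrete, the following is a minimal PyTorch sketch of the character-side encoder on the right of Figure 3, reusing the CharTruecaser sketch above. It is an illustration under assumptions, not the authors' implementation: the name CharCaseEncoder is invented, the word-level BiLSTM and CRF are omitted, and the NER model and truecaser are assumed to share a character indexer over lowercased text.

```python
import torch
import torch.nn as nn

class CharCaseEncoder(nn.Module):
    """Character encoder for a batch of tokens: learned character embeddings are
    concatenated with the truecaser's 2-d case distributions (Eq. 2), then encoded
    with a CNN into one vector per token, i.e. f(v_1, ..., v_|x_t|) in Eq. 3."""
    def __init__(self, truecaser, char_vocab_size, char_dim=30, num_filters=50, kernel=3):
        super().__init__()
        self.truecaser = truecaser                 # pretrained truecaser (gray box in Figure 3)
        for p in self.truecaser.parameters():      # frozen here, as in Figure 3
            p.requires_grad = False
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)   # NER-side char vectors (green)
        self.cnn = nn.Conv1d(char_dim + 2, num_filters, kernel, padding=kernel // 2)

    def forward(self, char_ids):                   # char_ids: (num_tokens, max_chars)
        c = self.char_emb(char_ids)                # (num_tokens, max_chars, char_dim)
        # Detach: gradients from the NER tag loss must not flow into the truecaser.
        d = self.truecaser(char_ids).detach()      # (num_tokens, max_chars, 2) case vectors (blue)
        v = torch.cat([c, d], dim=-1)              # Eq. 2: v_i = [c_i ; d_i]
        h = self.cnn(v.transpose(1, 2))            # (num_tokens, num_filters, max_chars)
        return h.max(dim=2).values                 # one character-level vector per token

# Eq. 1 / Eq. 3: the representation fed to the word-level BiLSTM is the concatenation
# of the word embedding with this character-level vector, e.g.
#   x_t = torch.cat([word_emb(token_ids), char_case_encoder(char_ids)], dim=-1)
```

Freezing the truecaser parameters here mirrors Figure 3; in the paper they can still change through pretrained initialization and, optionally, a separate truecaser fine-tuning objective, while the NER loss itself never reaches them because the predictions are detached.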
Table 1: Number of tokens in the train/dev/test splits for Wikipedia (Wiki) and Common Crawl (CC).

  Dataset         Train    Dev     Test
  Wikipedia       2.9M     294K    32K
  Common Crawl    24M      247K    494K

Statistics for the NER datasets (tokens and entities per split):

  Dataset                  Train       Dev       Test
  CoNLL2003   Tokens       203,621     51,362    46,435
              Entities     23,499      5,942     5,648
  Ontonotes   Tokens       1,088,503   147,724   152,728
              Entities     81,829      11,066    11,257
  WNUT17      Tokens       62,730      15,733    23,394
              Entities     1,975       836       1,079

Example sentences
" the investigation is still ongoing , " McMahon said .
