NERO: A Neural Rule Grounding Framework for Label-Efficient Relation Extraction

Wenxuan Zhou (1), Hongtao Lin (1), Bill Yuchen Lin (1), Ziqi Wang (2), Junyi Du (1), Leonardo Neves (3), Xiang Ren (1)
(1) University of Southern California, (2) Tsinghua University, (3) Snapchat Inc.
{zhouwenx, lin498, yuchen.lin, junyidu, xiangren}@usc.edu, [email protected], [email protected]

arXiv:1909.02177v4 [cs.CL] 15 Jan 2020

ABSTRACT

Deep neural models for relation extraction tend to be less reliable when perfectly labeled data is limited, despite their success in label-sufficient scenarios. Instead of seeking more instance-level labels from human annotators, here we propose to annotate frequent surface patterns to form labeling rules. These rules can be automatically mined from large text corpora and generalized via a soft rule matching mechanism. Prior works use labeling rules in an exact matching fashion, which inherently limits the coverage of sentence matching and results in a low-recall issue. In this paper, we present a neural approach to ground rules for RE, named NERO, which jointly learns a relation extraction module and a soft matching module. One can employ any neural relation extraction model as the instantiation for the RE module. The soft matching module learns to match rules with semantically similar sentences, such that raw corpora can be automatically labeled and leveraged by the RE module (with much better coverage) as augmented supervision, in addition to the exactly matched sentences. Extensive experiments and analysis on two public and widely used datasets demonstrate the effectiveness of the proposed NERO framework compared with both rule-based and semi-supervised methods. Through user studies, we find that the time for a human to annotate a rule and a sentence is similar (0.30 vs. 0.35 min per label). In particular, NERO's performance using 270 rules is comparable to that of models trained on 3,000 labeled sentences, yielding a 9.5x speedup. Moreover, NERO can predict unseen relations at test time and provides interpretable predictions. We release our code (https://github.com/INK-USC/NERO) to the community for future research.

ACM Reference Format: Wenxuan Zhou, Hongtao Lin, Bill Yuchen Lin, Ziqi Wang, Junyi Du, Leonardo Neves, and Xiang Ren. 2020. NERO: A Neural Rule Grounding Framework for Label-Efficient Relation Extraction. In WWW '20, April 20-24, 2020, Taipei. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/330ccc6307.3328180

The Web Conference '20, April 20-24, 2020, Taipei. © 2020 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-6317-4/19/07. https://doi.org/10.1145/330ccc6307.3328180

[Figure 1: Current rule-based methods mostly rely on exact/hard matching to the raw corpus and suffer from limited coverage. Rules: p1 (SUBJ-PER, 's child, OBJ-PER) → PER:CHILDREN; p2 (SUBJ-PER, is known as, OBJ-PER) → PER:ALTERNATIVE_NAMES; p3 (SUBJ-ORG, was founded by, OBJ-PER) → ORG:FOUNDED_BY. Corpus with matching scores for p3: s1 "Microsoft was founded by Bill Gates in 1975." (1.0, hard match); s2 "Microsoft was created by Bill Gates in 1975." (0.9); s3 "In 1975, Bill Gates launched Microsoft." (0.8). The rule body of p3 only matches sentence s1 but is also similar to s2 and s3, which express the same relation as p3. A "soft rule matching" mechanism is desirable to make better use of the corpus for label generation.]

1 INTRODUCTION

Relation extraction (RE) plays a key role in information extraction tasks and knowledge base construction; it aims to identify the relation between two entities in a given sentence. For example, given the sentence "Bill Gates is the founder of Microsoft" and the entity pair ("Bill Gates", "Microsoft"), a relation classifier is supposed to predict the relation ORG:FOUNDED_BY. Recent advances in neural language processing have shown that neural models [36-38] achieve great success on this task, yielding state-of-the-art performance when a large amount of well-annotated sentences is available. However, these supervised learning methods degrade dramatically when sentence-level labels are insufficient. This problem is partially addressed by distant supervision based on knowledge bases (KBs) [20, 30], which uses an existing KB to automatically annotate an unlabeled corpus under an over-simplified assumption: two entities co-occurring in a sentence should be labeled with their relation in the KB, regardless of their contexts. While distant supervision automates the labeling process, it also introduces noisy labels due to this context-agnostic labeling, which can hardly be eliminated.

In addition to KBs, labeling rules are also an important means of representing domain knowledge [25], and they can be automatically mined from large corpora [14, 21]. Labeling rules can be seen as a set of pattern-based heuristics for matching a sentence to a relation. These rules are much more accurate than KB-based distant supervision. The traditional way of using such rules is exact string matching: a sentence either is or is not matched by a rule. This kind of hard-matching method inherently limits the generalization of the rules to sentences with similar semantics but dissimilar words, which consequently causes the low-recall problem and data insufficiency for training neural models. For example, in Fig. 1, the rule p3 can only find s1 as a hard-matched instance for training the model, while other sentences (e.g., s2 and s3) should also be matched, for they have a similar meaning.

Many attempts have been made at RE using labeling rules and unlabeled corpora, as summarized in Fig. 2. Rule-based bootstrapping methods [2, 4, 15] extract relation instances from the raw corpus with a pre-defined rule mining function (e.g., TF-IDF, CBOW) and expand the rule set iteratively to increase coverage. However, they still make predictions by performing hard matching on rules, which is context-agnostic and suffers from the low-recall problem. Also, their matching function is not learnable and thus has trouble extracting semantically similar sentences. Another typical option is data programming [9, 25], which aims to annotate the corpus using rules with model fitting. It trains a neural RE model on the sentences hard-matched by the rules and then reduces the noise of the rules with dedicated algorithms. However, it does not consider the massive data that fail to be annotated by hard matching. Self-training [26], as a semi-supervised framework, does attempt to utilize unmatched sentences by taking the confident predictions of a learned model as labels, then training the model again and again while iteratively generating more confident predictions. However, it does not explicitly model the soft matching ability of rules.

[Figure 2: Comparison between previous work and the proposed NERO framework: (A) Bootstrapping, (B) Data Programming, (C) Self-Training, (D) NERO. The neural rule grounding process enables soft matching between rules and unmatched sentences using the soft rule matcher.]

Our framework, NERO, jointly learns a relation extraction module and a soft matching module; the soft matcher scores the semantic match between a rule and a sentence (e.g., via the similarity of their embeddings), which can be learned by a contrastive loss. Jointly training the two modules reinforces the quality of the pseudo-labels and in turn improves the performance of the relation classification module. Specifically, in a batch-mode training process, we use the up-to-date learned soft matcher to assign weighted labels to all unmatched sentences, which further serve as an auxiliary learning objective for the relation classifier module.

Extensive experiments on two public datasets, TACRED [38] and SemEval [11], demonstrate that the proposed NERO consistently outperforms baseline methods using the same resources by a large margin. To investigate the label efficiency of NERO, we further conduct a user study comparing models trained with rules and with labeled sentences, respectively, both created under the same time constraint. Results show that, to achieve similar performance, NERO requires about ten times less human annotation effort than traditional methods of gathering labels.

The contributions of our work are summarized as follows: (1) our proposed method, NERO, is among the first that can learn to generalize labeling rules for training an effective relation extraction model, without relying on additional sentence-level labeled data; (2) we propose a learnable rule matcher that can semantically ground rules to sentences with similar meanings that cannot be hard-matched; this matcher also provides interpretable predictions and supports few-shot learning on unseen relations; (3) we conduct extensive experiments to demonstrate the effectiveness of our framework in a low-resource setting and carefully investigate the efficiency of its use of human effort.

2 PROBLEM FORMULATION

We first introduce basic concepts and their notation, and then present the problem definition as well as the scope of this work.

Relation Extraction. Given a pair of entity strings (e_subj, e_obj), identified as the subject and object entities respectively, the task of relation extraction (RE) is to predict (classify) the relation r ∈ R ∪ {None} between them, where R is a pre-defined set of relations of interest. Specifically, here we focus on sentence-level RE [11, 18, 38], which aims to predict the relation of an entity mention pair in a sentence s, i.e., identifying the label r ∈ R for an instance of the form (e_subj, e_obj; s) and yielding the prediction (e_subj, r, e_obj; s).
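The contrast between hard and soft rule matching in Figure 1 can be illustrated with a small sketch. The word vectors, the `embed` helper, and the fallback in `match_score` below are illustrative stand-ins invented for this example; in NERO the soft matcher's representations are learned with a contrastive loss, and nothing here reproduces the trained model:

```python
from math import sqrt

# Rule p3 from Figure 1: (SUBJ-ORG, "was founded by", OBJ-PER) -> ORG:FOUNDED_BY
RULE_BODY = "was founded by"

# Toy word vectors standing in for learned embeddings (hypothetical values).
VEC = {
    "founded":  (1.0, 0.2),
    "created":  (0.9, 0.3),
    "launched": (0.8, 0.4),
    "was":      (0.1, 1.0),
    "by":       (0.2, 0.9),
}

def embed(phrase):
    """Average the vectors of known tokens -- a crude phrase embedding."""
    toks = [t for t in phrase.lower().split() if t in VEC]
    return tuple(sum(VEC[t][i] for t in toks) / len(toks) for i in range(2))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def match_score(rule_body, sentence):
    """Hard matching gives 1.0 on exact containment; otherwise fall back to
    a soft score from embedding similarity (the key idea behind NERO)."""
    if rule_body in sentence:
        return 1.0  # hard match, e.g. s1 in Figure 1
    return cosine(embed(rule_body), embed(sentence))

s1 = "Microsoft was founded by Bill Gates in 1975."
s2 = "Microsoft was created by Bill Gates in 1975."
s3 = "In 1975, Bill Gates launched Microsoft."

for s in (s1, s2, s3):
    print(f"{match_score(RULE_BODY, s):.3f}  {s}")
```

Under a hard matcher only s1 would receive the ORG:FOUNDED_BY pseudo-label; the soft fallback also assigns high scores to s2 and s3 (s2 higher than s3, mirroring the 0.9 vs. 0.8 scores in Figure 1), so both can contribute weighted supervision to the RE module.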
