Survey of Textual Entailment Approaches, Rakesh

Survey of Textual Entailment Approaches

Rakesh Khobragade, Heaven Patel and Pushpak Bhattacharyya
Indian Institute of Technology Bombay
{rkhobrag, heaven, [email protected]}

Abstract

Natural Language Inference, or Textual Entailment, has applications in many NLP tasks such as question answering, text summarization, paraphrase detection, and machine translation evaluation. In this survey, we analyze developments in the NLI task. We discuss various approaches that have been proposed and the datasets available for training deep learning models. We also describe a few applications of NLI.

1 Introduction

Textual Entailment, or Natural Language Inference, is considered one of the major tasks in Natural Language Processing (NLP) that requires deep semantic understanding. For a pair of sentences, textual entailment can be defined as the task of deciding whether one of them can be inferred from the other. Consider Example 1, where we can infer the hypothesis "John knows how to drive" from the premise "John cleared the driving test". Since the entailment task needs semantic understanding, models trained on entailment data can be applied to many other NLP tasks such as text summarization, paraphrase detection, and machine translation. Note that the inference relation in entailment is unidirectional, holding only from premise to hypothesis. If entailment also holds from hypothesis to premise, we can say that both sentences convey the same meaning.

Example 1: "John took the driving test today and cleared it" ⇒ "John knows how to drive"

2 RTE Challenges

The RTE (Recognizing Textual Entailment) challenge, started in 2005 (Dagan et al., 2005), is an annual competition where researchers from the NLP community showcase entailment models. In this competition, two text fragments are given, and the task is to determine whether one can be inferred from the other. There were a total of 7 RTE challenges from 2005 to 2011. RTE 1-3 framed the problem as 2-way classification with only the Entailment and Non-entailment classes. From RTE 4 onward, a neutral class was introduced, making RTE a 3-way classification task. The addition of the neutral class made the RTE task harder, but it also enabled systems to distinguish contradictory premise-hypothesis pairs from unknown ones. The RTE challenges also included summarization and novelty detection tasks alongside the entailment task, to increase the difficulty and make the challenges more realistic. The RTE challenges have resulted in benchmarks for the NLI task and have uncovered the difficulties involved in recognizing textual entailment.
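The relationship between the 2-way scheme of RTE 1-3 and the 3-way scheme used from RTE 4 onward can be sketched as a small label mapping (the label strings here are illustrative, not taken from any official RTE release):

```python
# Collapse the 3-way labels (RTE 4 onward) into the 2-way scheme of
# RTE 1-3: "contradiction" and "neutral" both become "non-entailment".
THREE_WAY = {"entailment", "contradiction", "neutral"}

def to_two_way(label: str) -> str:
    """Map a 3-way RTE/NLI label to the older 2-way scheme."""
    if label not in THREE_WAY:
        raise ValueError(f"unknown label: {label}")
    return "entailment" if label == "entailment" else "non-entailment"
```

The mapping is lossy in one direction only, which is why 3-way systems can be evaluated on 2-way benchmarks but not vice versa.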
3 Classical Methods

The basic approach to recognizing entailment is to compare re-represented versions of the premise and the hypothesis, and to determine from this comparison whether the hypothesis can be derived from the premise. The sentences can be re-represented using either lexical or syntactic methods. Lexical methods work directly on the input surface strings, operating solely on a string comparison between the text and the hypothesis. The lexical approach ignores the semantic relationship between premise and hypothesis and determines entailment based only on lexical cues; the most common lexical techniques include word overlap and subsequence matching. Syntax-based (syntactic) approaches convert the premise and hypothesis into directed graphs, typically built from parse trees, and then compare these graphs with each other. Entailment or non-entailment is determined from the comparison between the graphs of premise and hypothesis.

Entailment can be recognized using different rules and subtasks. In Example 2, we need to know the following to identify entailment: "Cisco Systems Inc." is the same as "Cisco" (entity matching), "filed a lawsuit" means the same as "accused" (paraphrasing), "last year" can be eliminated from the text (alignment), and "Cisco accused Apple" is different from "Apple accused Cisco" (semantic role labeling).

Example 2: "Cisco Systems Inc. filed a lawsuit against Apple for a patent violation last year." ⇒ "Cisco accused Apple of a patent violation."
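A minimal sketch of the word-overlap idea described above: score the fraction of hypothesis tokens that also appear in the premise and threshold it. The whitespace tokenizer and the threshold value are illustrative assumptions, not taken from any specific RTE system:

```python
def overlap_score(premise: str, hypothesis: str) -> float:
    """Fraction of distinct hypothesis words that also occur in the premise."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    if not h:
        return 0.0
    return len(h & p) / len(h)

def predict(premise: str, hypothesis: str, threshold: float = 0.7) -> str:
    """Toy lexical entailment decision by thresholding word overlap."""
    return "entailment" if overlap_score(premise, hypothesis) >= threshold else "non-entailment"
```

Note that this baseline fails on exactly the semantic-role case above: "Cisco accused Apple" and "Apple accused Cisco" have perfect word overlap despite differing in meaning.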
4 Datasets

The RTE challenges provided the initial datasets for training and testing NLI models, but these were very limited in size, comprising only a few thousand sentence pairs. As neural network architectures became more powerful, attempts were made to create large corpora suitable for training deep neural networks. The accuracy achieved on these datasets is treated as the benchmark for comparing different NLI models. In this section, we look at some of the datasets that enabled models to learn to predict correctly from many examples.

4.1 SNLI

The Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015) was the first to give the research community a platform for testing neural network models. SNLI was created through crowdsourcing on the Amazon Mechanical Turk platform. Human volunteers were presented with image captions and asked to construct three sentences: one supporting the subject of the image, one contradicting it, and one unrelated to it. This produced hypotheses belonging to the three classes entailment, contradiction, and neutral. The premises were constructed in another crowdsourced task in which volunteers captioned images from Flickr30k (Young et al., 2014); about 30k images were used and 160k captions were acquired in total. Overall there are 550,152 training pairs, and 10k pairs each in the development and test sets.

4.2 MNLI

Multi-Genre Natural Language Inference (MNLI) (Williams et al., 2018) improves on the SNLI corpus by adding more diversity in the types of sentences. MNLI is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. It contains sentence pairs from 10 different genres, of which only 5 appear in the training set while all 10 appear in the test set. This makes learning NLI more challenging and more generic.

4.3 XNLI

The Cross-lingual NLI (XNLI) corpus (Conneau et al., 2018) was created to encourage research in cross-linguality. It is derived from the MNLI corpus for 15 languages using crowd-sourcing. The dev and test sets of MNLI were manually translated into 15 languages: English, French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, Hindi, Swahili, and Urdu. Of these, Urdu and Swahili are low-resource languages.
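MNLI's matched/mismatched evaluation design can be mimicked with a small sketch: an evaluation pair is "matched" if its genre occurs in the training genres and "mismatched" otherwise. The helper function and the record format are illustrative assumptions, not MNLI's own tooling:

```python
def split_by_genre(test_pairs, train_genres):
    """Split evaluation pairs into matched/mismatched by genre,
    mirroring MNLI's design of 5 training genres out of 10 total."""
    matched, mismatched = [], []
    for pair in test_pairs:
        (matched if pair["genre"] in train_genres else mismatched).append(pair)
    return matched, mismatched
```

The mismatched half is what makes MNLI a test of genre transfer rather than memorization of genre-specific cues.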
4.4 SciTail

Although SNLI has proved beneficial for advancing techniques for recognizing textual entailment and has given researchers a corpus on which to run many deep neural models, it has been observed that the dataset is not very effective for training a model for a particular end task such as question answering. SciTail (Khot et al., 2018) was created with the end task of question answering in view.

Figure 1: An example from the SciTail dataset

The premise-hypothesis pairs are created from a corpus of high-school-level science multiple-choice questions. The hypothesis is created by combining the question and the correct answer, and the premise is obtained independently from a corpus of web text such that it entails the hypothesis. Premises for the neutral pairs are created similarly. Since the premise and hypothesis are created independently, they differ considerably in syntactic and lexical structure. This makes the entailment task more challenging: there can be sentence pairs with little word overlap that are similar in meaning and, at the same time, pairs with many overlapping words that are unrelated. The annotation of each such premise-hypothesis pair as supporting (entails) or not (neutral) was crowdsourced to create the SciTail dataset. The dataset contains 27,026 examples: 10,101 with the entails label and 16,925 with the neutral label.
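The question-plus-correct-answer construction of hypotheses can be sketched naively as below. SciTail's actual construction involved more careful rewriting; the fill-in-the-blank template and fallback here are purely illustrative:

```python
def make_hypothesis(question: str, answer: str) -> str:
    """Naive sketch: combine a multiple-choice question and its correct
    answer into a declarative hypothesis sentence."""
    q = question.strip().rstrip("?")
    if "___" in q:                      # fill-in-the-blank style question
        return q.replace("___", answer) + "."
    return f"{q} {answer}.".strip()     # crude fallback: append the answer
```

For example, the question "Plants make food using ___" with the answer "photosynthesis" yields the hypothesis "Plants make food using photosynthesis."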
4.5 MRPC

The MRPC paraphrase corpus (Dolan and Brockett, 2005) consists of 5,800 sentence pairs, each labeled as Paraphrase or Non-paraphrase. The corpus was collected from web news sources; 67% of the examples are positive and only 33% are negative. Around 4,100 pairs are training examples and 1,700 pairs are test examples.
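Because 67% of MRPC pairs are positive, a trivial majority-class predictor already looks strong on accuracy alone; a quick sketch of that baseline:

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of always predicting the most frequent label."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)
```

On a label distribution like MRPC's, this baseline scores about 0.67, which is why F1 is commonly reported alongside accuracy for skewed paraphrase corpora.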
4.6 Glockner corpus

The Glockner dataset (Glockner et al., 2018) was created from the SNLI dataset for the sole purpose of measuring the lexical inference ability of a model.

4.7 QQP

The Quora Question Pairs (QQP) corpus is built around a question similarity task. It includes around 400,000 question-pair examples labeled for duplication. Each line has an ID for the question pair, individual IDs for the two questions, the two questions themselves, and a label indicating whether they are duplicates. The training data includes 255,027 non-duplicate and 149,263 duplicate pairs, whereas the test data on the Kaggle platform includes 2,345,796 question pairs.

5 Approaches

A lot about the meaning of a sentence can be inferred from its lexical and syntactic composition. Features like the presence of negation or synonyms can be used to compare sentences, and a dependency graph can be used to understand how different entities interact with each other. But because of the variability and ambiguity of natural language, semantics cannot be ignored. In this section, we first look at an approach using machine learning and then move on to more recent neural network models.

With the advent of the SNLI and MNLI corpora, it became possible to run deep learning models for NLI. Large datasets comprising many examples of differing composition allow the discovery of features that would otherwise be difficult to identify. Prior to deep learning, most approaches made use of hand-crafted features, mostly distance metrics, to train NLI models. This led to a very restricted set of features that could be used during training. Also, since different languages compose sentences differently, manually extracting such language features is not feasible.
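A sketch of the kind of hand-crafted feature vector such pre-neural systems fed to a classical classifier. These particular features (word overlap, length difference, negation mismatch) are illustrative of the genre, not drawn from any one published system:

```python
NEGATIONS = {"not", "no", "never", "n't"}

def features(premise: str, hypothesis: str) -> list:
    """Toy hand-crafted features for an NLI pair: word overlap,
    length difference, and a negation-mismatch flag."""
    p, h = premise.lower().split(), hypothesis.lower().split()
    overlap = len(set(p) & set(h)) / max(len(set(h)), 1)
    length_diff = abs(len(p) - len(h))
    neg_mismatch = bool(set(p) & NEGATIONS) != bool(set(h) & NEGATIONS)
    return [overlap, float(length_diff), float(neg_mismatch)]
```

Each feature here is language-specific (the negation word list, whitespace tokenization), which illustrates why hand-crafting such features does not scale across languages.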
