
A Multi-level Neural Network for Implicit Causality Detection in Web Texts

Shining Liang (a,b), Wanli Zuo (a,b), Zhenkun Shi (b,c), Sen Wang (d), Junhu Wang (e), Xianglin Zuo (a,b)

a Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, Changchun, China
b College of Computer Science and Technology, Jilin University, Jilin, China
c Tianjin Institute of Industrial Biotechnology, Chinese Academy of Sciences
d School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
e School of Information and Communication Technology, Griffith University, Queensland, Australia

Abstract

Mining causality from text is a complex and crucial natural language understanding task corresponding to human cognition. Existing studies of this task can be grouped into two primary categories: feature engineering based and neural model based methods. In this paper, we find that the former has incomplete coverage and inherent errors but provides prior knowledge, while the latter leverages context information but performs insufficient causal inference. To handle these limitations, we propose a novel causality detection model named MCDN that explicitly models the causal reasoning process and, furthermore, exploits the advantages of both methods. Specifically, we adopt multi-head self-attention to acquire semantic features at the word level and develop the SCRN to infer causality at the segment level. To the best of our knowledge, this is the first time the Relation Network has been applied to causality tasks. The experimental results show that: i) the proposed approach achieves prominent performance on causality detection; ii) further analysis manifests the effectiveness and robustness of MCDN.

Keywords: Causality Detection, Multi-level Neural Network, Relation Network, Transformer

1. Introduction

Automatic text causality mining is a critical but difficult task because causality is thought to play an essential role in human cognition when making decisions [1]. Thus, automatic text causality mining has been studied extensively in a wide range of areas, such as medicine [2], question answering [3] and event prediction [4]. A tool that automatically extracts meaningful causal relations could help us construct causality graphs [5] to unveil previously unknown relationships between events and accelerate the discovery of the intrinsic logic of the events [6].

Many research efforts have been made to mine causality from text corpora with complex sentence structures in books or newspapers [7, 8, 9]. However, the scale of textual data in the world, e.g., on the web, is much larger than that in books and newspapers. Despite the success of existing studies on extracting explicit causality, there are two reasons why most cannot be directly applied to causality mining on web text, where a large number of implicit causality cases exist. First, most publicly available causality mining datasets are collected from books and newspapers. Their language expressions are usually formal but less diverse than web text. Second, the existing works mainly focus on explicit causal relations expressed by intra-sentence or inter-sentence connectives, without considering ambiguous and implicit cases. It is well known that implicit causality often has a simple sentence structure without any connectives, as shown below.
• Example 1: I got wet during the day and came home with a fever at night.

• Example 2: Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.

In Example 1, "got wet" is the cause of "fever", and there are no connectives available for inference. By contrast, in Example 2 there are explicit connectives (i.e., "since" and "result") benefiting causality detection. Consequently, our perception of causality would be incomplete if we ignored those implicit cases. There is thus a strong demand for an approach that mines both explicit and implicit causality from web text.

In this paper, we formulate causality mining as two sequential sub-tasks: causality detection [4, 10, 11] and cause-effect pair extraction [12, 13], which are also investigated in SemEval-2020 Task 5 [14] and FinCausal 2020 [15]. When dealing with large-scale web text, detecting causality is a foundational step before extracting cause-effect pairs. It could help build a high-quality corpus with diverse linguistic characteristics for causal pair extraction, which leads to lower annotation cost and less model complexity in downstream tasks. In recent years, Hidey and McKeown [10] utilized "AltLexes" (alternative lexicalizations) to build a large open-domain causality detection dataset based on parallel Wikipedia articles, especially for ambiguous and implicit cases, as shown in Table 1.

Table 1: Examples of ambiguous AltLexes in the parallel data.

Label: Causal
  English Wikipedia: A moving observer thus sees the light coming from a slightly different direction and consequently sees the source at a position shifted from its original position.
  Simple Wikipedia: A moving observer thus sees the light coming from a slightly different direction and consequently sees the source at a position shifted from its original position.

Label: Non-causal
  English Wikipedia: His studies were interrupted by World War I, and consequently taught at schools in Voronezh and Saratov.
  Simple Wikipedia: However, he had to stop studying because of the World War I, and instead taught at schools in Voronezh and Saratov.

Most existing works on the causality detection task fall into two categories: i) feature engineering based methods, which widely apply linguistic features (part-of-speech (POS) tagging, dependency parsing) [4, 10] and statistical features (templates, co-occurrence) [11]. There exist thousands of AltLexes, some of which may appear in both causal and non-causal cases, as "consequently" does in Table 1. However, such complicated features hardly capture the subtle discrepancies among various causality expressions, and the inherent errors of natural language processing (NLP) tools are accumulated and propagated; ii) neural model based methods, which have achieved prominent results with the end-to-end paradigm and are prevalent for their ease of design and usage. We conduct an empirical study to assess the application of neural text classifiers to causality detection. The performance of most neural model based methods falls behind that of feature engineering based methods (see Table 3). The reason is that they mainly focus on the interactions among words and treat the sentence as a whole, but lack explicit inference of causality within sentences. Recently, pre-trained language models (PLMs) [16, 17] have developed dramatically and even exceed human performance on many NLP tasks. Nevertheless, when it comes to large-scale web text data, their memory and time consumption is considerable.
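To make the ambiguity in Table 1 concrete, the following is a minimal Python sketch (ours, not the authors' code) showing that lexical matching alone cannot decide causality: both Table 1 sentences contain "consequently", yet only one is causal. The tiny lexicon and the find_altlex helper are illustrative assumptions, not the full AltLex resource of Hidey and McKeown [10].

from typing import Optional

# A tiny illustrative lexicon; the real AltLex list contains thousands
# of entries, many of them ambiguous between causal and non-causal use.
ALTLEX_LEXICON = ["consequently", "as a result", "subsequently", "since"]

def find_altlex(sentence: str) -> Optional[str]:
    """Return the first AltLex occurring in the sentence, or None."""
    lowered = sentence.lower()
    for altlex in ALTLEX_LEXICON:
        if altlex in lowered:
            return altlex
    return None

causal = ("A moving observer thus sees the light coming from a slightly "
          "different direction and consequently sees the source at a "
          "position shifted from its original position.")
non_causal = ("His studies were interrupted by World War I, and consequently "
              "taught at schools in Voronezh and Saratov.")

# Both sentences match the same AltLex, so the lexicon alone cannot
# separate the causal case from the non-causal one; the sentence
# context must be classified as well.
print(find_altlex(causal), find_altlex(non_causal))  # consequently consequently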
Figure 1: An example of the different segments within a sentence, where "subsequently" is the AltLex. The sentence "he has subsequently written a further nine plays" is split into BL, L and AL; BL-L and L-AL model the cause/effect-connective interactions, while B-A and A-B model the cause-effect interaction.

Faced with the above problems, we propose the Multi-level Causality Detection Network (MCDN) for causality detection in web texts based on the following observations: i) neural network based methods can reduce the labor cost and inherent errors of feature engineering based methods, whereas combining the prior knowledge of the latter benefits the former [18]; ii) causal reasoning is a high-level human ability [1], which calls for multi-level analysis within the model. MCDN modifies a Transformer Encoder module to obtain semantic representations at the word level and integrates a novel Self Causal Relation Network (SCRN) module at the segment level that infers causality via the segments on both sides of the connectives. Moreover, we argue that integrating multi-level knowledge could alleviate the token-level feature overfitting reported in [14].

Specifically, MCDN splits the sentence into three segments on the basis of the "segment before AltLex" (BL), the "AltLex" (L) and the "segment after AltLex" (AL), as shown in Figure 1. Intuitively, the cause and effect parts usually lie on the two sides of the AltLex. This simple prior feature minimizes the impact of feature engineering complexity and errors. Motivated by explicitly modeling the causal reasoning process, the SCRN module encodes the segments and aggregates them into pair-wise groups, each concatenated with a sentence representation. The interactions between the cause/effect parts and the connective, BL-L and L-AL, describe whether each segment conveys causation given the current AltLex. Through the interactions between the causal segments, B-A and A-B, SCRN directly infers the potential causality when they are coupled in the context. The above information comprises the segment-level representation. Next, we utilize the Transformer architecture [19] at the word level. To keep the framework fast and light in the large-scale web text scenario, the heads and blocks of the Transformer Encoder module are clipped, and consequently pre-trained weights are not used. Furthermore, we extend the segment embedding to accommodate multiple segments in the input. With this end-to-end module, MCDN combines local context and long-distance dependencies to obtain a word-level representation. Finally, we perform detection with the word-level and segment-level representations. In general, the contributions
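As a concrete illustration of the segment-level design described above, the following PyTorch sketch splits a tokenized sentence around the AltLex and reasons over the four directed segment pairs (BL-L, L-AL, B-A, A-B), each conditioned on a sentence representation. The module names (SCRNSketch, split_on_altlex), the hidden size, the pooled vector inputs and the two-layer MLPs are our assumptions for illustration, not the authors' exact implementation.

import torch
import torch.nn as nn

def split_on_altlex(tokens, altlex_span):
    """Split a tokenized sentence into BL / L / AL around the AltLex span."""
    start, end = altlex_span
    return tokens[:start], tokens[start:end], tokens[end:]

class SCRNSketch(nn.Module):
    """Sketch of the segment-level reasoning: score the four directed
    segment pairs (BL-L, L-AL, B-A, A-B), each conditioned on the
    sentence representation, with a shared MLP, then aggregate."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        # g scores one (segment, segment, sentence) group, in the
        # Relation Network style the paper builds on.
        self.g = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # f projects the aggregated relation features into the
        # segment-level representation used by the final detector.
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())

    def forward(self, bl, l, al, sent):
        # bl, l, al, sent: (batch, hidden) pooled segment/sentence vectors.
        pairs = [(bl, l), (l, al), (bl, al), (al, bl)]  # BL-L, L-AL, B-A, A-B
        relations = [self.g(torch.cat([a, b, sent], dim=-1)) for a, b in pairs]
        # Sum the pair-wise relation features, then project.
        return self.f(torch.stack(relations).sum(dim=0))

# Usage with random vectors standing in for a real segment encoder:
bl, l, al, sent = (torch.randn(2, 128) for _ in range(4))
print(SCRNSketch()(bl, l, al, sent).shape)  # torch.Size([2, 128])

# Example split for the Figure 1 sentence, where the AltLex
# "subsequently" occupies token positions [2, 3).
tokens = "he has subsequently written a further nine plays".split()
print(split_on_altlex(tokens, (2, 3)))  # (['he', 'has'], ['subsequently'], ...)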