Story Fragment Stitching: The Case of the Story of Moses

Mohammed Aldawsari1, Ehsaneddin Asgari2, Mark A. Finlayson3
1,3 Florida International University, 2 University of California, Berkeley
[email protected], {malda021, markaf}@fiu.edu

Abstract

We introduce the task of story fragment stitching, which is the process of automatically aligning and merging event sequences of partial tellings of a story (i.e., story fragments). We assume that each fragment contains at least one event from the story of interest, and that every fragment shares at least one event with another fragment. We propose a graph-based unsupervised approach to solving this problem in which event mentions are represented as nodes in a graph, and the graph is compressed using a variant of model merging to combine nodes. The goal is for each node in the final graph to contain only coreferent event mentions. To find coreferent events, we use BERT contextualized embeddings in conjunction with a tf-idf vector representation. Constraints on the merge compression preserve the overall timeline of the story, and the final graph represents the full story timeline. We evaluate our approach using a new annotated corpus of the partial tellings of the story of Moses found in the Quran, which we release for public use. Our approach achieves a performance of 0.63 F1 score.

1 Introduction

Understanding stories is a long-held goal of both artificial intelligence and natural language processing [Charniak, 1972; Schank and Abelson, 1977; Wilensky, 1978; Dyer, 1983; Riloff, 1999; Frank et al., 2003; Mueller, 2007; Winston, 2014]. Stories are found throughout our daily lives, e.g., in news, entertainment, education, religion, and many other domains. Automatically understanding stories implicates many interesting natural language processing tasks, and much information can be extracted from stories, including concrete facts about specific events, people, and things, commonsense knowledge about the world, and cultural knowledge about the societies in which we live. One interesting and challenging task which has not yet been solved is what we call here story fragment stitching. In this task we seek to merge partial tellings of a story (where each partial telling contains part of the sequence of events of the story, perhaps from different points of view, and may be found across different sources or media) into one coherent narrative which may then be used as the basis for further processing. Conceptually, this task is similar to both cross-document event coreference (CDEC) and event ordering in NLP. However, story fragment stitching, as we define it, presents a more challenging problem for at least two reasons. First, and unlike event coreference, the overall timeline of the story's events needs to be preserved across all fragments. Second, and unlike event ordering, which targets only events related to a single entity, this work considers all events across all fragments.

For the purposes of this work, we define a story as a sequence of events effected by characters and presented in a discourse. This is in accord with fairly standard definitions: for example, [Forster, 1927] said that "A story is a narrative of events arranged in their time sequence." As a simplifying assumption, we additionally assume that the events in the story are presented in the chronological order in which they take place (i.e., the fabula time order) [Bordwell, 2007]. We leave the problem of extracting the chronological ordering of events within a text for other work.

We present an approach to the story fragment stitching problem inspired by [Finlayson, 2016], which is in turn based on model merging, a regular grammar learning algorithm [Stolcke and Omohundro, 1993], using similarity measures based on BERT contextualized embeddings and tf-idf weights of events and their arguments.
We apply this approach to a concrete example of this problem, namely, the story of the prophet Moses as found in the Quran, the Islamic holy book. The story of Moses is not found in one single telling in the Quran; rather, it is found in eight fragments spread across six different chapters (the chapters of the Quran are called suras), with the story comprising 7,931 total words across 283 verses of anywhere from 2 to 94 words in length. In this work we demonstrate our approach using the seven fragments with coherent timelines.

The story of Moses is especially useful for this work because it has been subject to detailed event analysis; in particular, [Ghanbari and Ghanbari, 2008] identified a canonical timeline of events for the story. Further, the Quran verse structure provides a natural unit of analysis, where nearly every verse is related to only a single event in the story timeline. We manually extracted 573 event mentions from 273 verses (omitting 11, as described later) and annotated all events corresponding to Ghanbari's event categories. We used this data to test our approach, resulting in a proof of concept of story fragment stitching. We release both our code and data to enable reimplementation.

We begin by discussing prior work on cross-document event coreference and event ordering, as well as the description and analysis of story structure (§2). Then we introduce our method, including the task definition (§3.1) and specific aspects of our approach (§3.2–§3.3). We then describe our evaluation, including construction of the gold standard for the Moses story in the Quran (§4.1), the experiment setup (§4.2), and the results of our model (§4.5), as well as an error analysis (§5). We conclude with a list of contributions (§6).

2 Related Work

The problems most closely related to story fragment stitching are cross-document event coreference (CDEC) and cross-document event ordering. In CDEC, the goal is to group expressions that refer to the same event across multiple documents [Bagga and Baldwin, 1999; Lee et al., 2012; Goyal et al., 2013; Saquete and Navarro-Colorado, 2017; Kenyon-Dean et al., 2018; Barhom et al., 2019]. In the event ordering task, which was introduced at SemEval-2015 [Minard et al., 2015], the goal is to order events across documents in which a specific target entity is involved; that is, a system should produce a timeline for a specific target entity, consisting of the ordered list of the events in which that entity participates. Similarly, the within-document event sequence detection task, introduced in the TAC KBP 2017 event track [Mitamura et al., 2017], aims to identify event sequences (i.e., after links) that occur in a script [Schank and Abelson, 1977].

Despite this very interesting and useful prior work, these systems are not directly applicable to the task of story fragment stitching as we define it. In particular, CDEC systems ignore the timeline of the story's events (i.e., the overall timeline of the story's events is not guaranteed to be preserved across all fragments), while event ordering systems only order the events involving a single target entity.

3 Approach

We now discuss the precise definition of the story fragment stitching task (§3.1) and the details of the two main components of our approach: the model formulation (§3.2) and the graph merge that aligns fragment events into a full, ordered, end-to-end list of story events (§3.3).

3.1 Task

We define the goal of story fragment stitching as: align a set of story fragments into a full, ordered, end-to-end list of story events. We assume that the story fragments are ordered lists of events, where the order is that of the fabula, namely, the order in which the events happen in the story world. In many stories the fabula order is different from the discourse order, but we do not consider this case here; we leave the problem of extracting the chronological order of events to other work. We also assume that each fragment shares at least one event with another fragment. The output of the system is an ordered list of nodes, where each node is a collection of coreferring event mentions that all describe one particular event, and these nodes are in the same order as the overall fabula.
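To make the input and output of the task concrete, the following is a minimal sketch of one possible representation; the names (EventMention, StoryNode, and so on) and fields are our own illustration and do not come from the paper's released code or data.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class EventMention:
    """One event mention inside a single fragment (hypothetical schema)."""
    fragment_id: str            # which partial telling the mention comes from
    position: int               # index in the fragment's fabula-ordered event list
    lemma: str                  # lemma of the event trigger, e.g. "flee"
    arguments: Tuple[str, ...]  # semantic arguments, e.g. ("Moses", "Egypt")
    text: str                   # the verse (or clause) containing the mention


# A story fragment is simply a fabula-ordered list of its event mentions.
Fragment = List[EventMention]


@dataclass
class StoryNode:
    """One node of the stitched timeline: a set of coreferring mentions."""
    mentions: List[EventMention] = field(default_factory=list)


# The desired output: a fabula-ordered list of nodes covering all fragments,
# where every mention in a node describes the same underlying story event.
StoryTimeline = List[StoryNode]
```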
3.2 Model Formulation

The first step of the approach is model initialization, shown in lines 1–3 of Algorithm 1. Using the function constructLinearBranch, we convert each fragment's list of events into a linear directed graph (a linear branch) in which each node contains only a single event. Each event is represented by a vector that is the concatenation of the event's contextualized embedding from the BERT model and the tf-idf weights of the event lemma and its semantic arguments. BERT [Devlin et al., 2018] is a multi-layer bidirectional transformer trained on plain text for the masked word prediction and next sentence prediction tasks, while tf-idf is the standard term weighting approach for reflecting how important a word is in a document relative to the rest of the documents [Salton and McGill, 1986]. Using the function linkGraphs, we then link all the linear branches to a shared start node and end node, resulting in a single directed graph over the full set of fragments, as shown in Figure 1.

This initial model is used to generate possible solutions by merging different nodes on the basis of a similarity measure, discussed below.
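As an illustration of this initialization step, the sketch below mirrors the two functions named above (constructLinearBranch and linkGraphs) under several assumptions of our own: mention records with fragment_id, position, lemma, and arguments fields; contextual embeddings supplied as a precomputed dictionary (e.g., pooled BERT outputs); scikit-learn for the tf-idf weights; and networkx for the graph. It is a sketch, not the authors' implementation.

```python
from collections import namedtuple

import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

# Minimal stand-in for an event mention record (hypothetical fields).
Mention = namedtuple("Mention", ["fragment_id", "position", "lemma", "arguments"])


def build_event_vectors(fragments, contextual_embeddings):
    """Concatenate each mention's contextual embedding (assumed precomputed,
    e.g., a pooled BERT vector) with tf-idf weights of its lemma and semantic
    arguments, treating each mention as one tf-idf 'document'."""
    mentions = [m for frag in fragments for m in frag]
    docs = [" ".join([m.lemma] + list(m.arguments)) for m in mentions]
    tfidf = TfidfVectorizer().fit_transform(docs).toarray()
    return {
        (m.fragment_id, m.position): np.concatenate(
            [contextual_embeddings[(m.fragment_id, m.position)], tfidf[i]]
        )
        for i, m in enumerate(mentions)
    }


def construct_linear_branch(graph, fragment, vectors):
    """Add one fragment as a chain of single-event nodes, preserving its order."""
    node_ids = []
    for m in fragment:
        node_id = (m.fragment_id, m.position)
        graph.add_node(node_id, mentions=[m], vector=vectors[node_id])
        node_ids.append(node_id)
    for a, b in zip(node_ids, node_ids[1:]):
        graph.add_edge(a, b)
    return node_ids


def link_graphs(fragments, vectors):
    """Join every fragment's linear branch under shared START and END nodes."""
    graph = nx.DiGraph()
    graph.add_node("START")
    graph.add_node("END")
    for fragment in fragments:
        chain = construct_linear_branch(graph, fragment, vectors)
        if chain:
            graph.add_edge("START", chain[0])
            graph.add_edge(chain[-1], "END")
    return graph


# Toy usage with two fragments and random stand-in contextual embeddings.
frag_a = [Mention("A", 0, "flee", ("Moses", "Egypt")),
          Mention("A", 1, "meet", ("Moses", "Jethro"))]
frag_b = [Mention("B", 0, "escape", ("Moses",))]
embeddings = {(m.fragment_id, m.position): np.random.rand(768)
              for m in frag_a + frag_b}
initial_graph = link_graphs([frag_a, frag_b],
                            build_event_vectors([frag_a, frag_b], embeddings))
```

Keeping each fragment as its own chain between shared START and END nodes preserves the within-fragment event order, which the later merge constraints rely on to preserve the overall story timeline.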