Screenplay Summarization Using Latent Narrative Structure

Pinelopi Papalampidi1, Frank Keller1, Lea Frermann2, Mirella Lapata1
1Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh
2School of Computing and Information Systems, University of Melbourne
[email protected], [email protected], [email protected], [email protected]

Abstract

Most general-purpose extractive summarization models are trained on news articles, which are short and present all important information upfront. As a result, such models are biased by position and often perform a smart selection of sentences from the beginning of the document. When summarizing long narratives, which have complex structure and present information piecemeal, simple position heuristics are not sufficient. In this paper, we propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models. We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays (i.e., extract an optimal sequence of scenes). Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode and improve summarization performance over general extractive algorithms, leading to more complete and diverse summaries.

Figure 1: Example of narrative structure for episode "Burden of Proof" from the TV series Crime Scene Investigation (CSI); turning points are highlighted between the six stages:
- Setup: Victim: Mike Kimble, found in a Body Farm. Died 6 hours ago, unknown cause of death.
- TP1, Opportunity: CSI discover cow tissue in Mike's body. Cross-contamination is suggested.
- New Situation: Probable cause of death: Mike's house has been set on fire. CSI finds blood: Mike was murdered, the fire was a cover-up. First suspects: Mike's fiancee, Jane, and her ex-husband, Russ.
- TP2, Change of Plans: CSI finds photos in Mike's house of Jane's daughter, Jodie, posing naked. Mike is now a suspect of abusing Jodie.
- Progress: Russ allows CSI to examine his gun.
- TP3, Point of no Return: CSI discovers that the bullet that killed Mike was made of frozen beef that melted inside him. They also find beef in Russ' gun.
- Complications: Russ confesses that he knew that Mike was abusing Jodie, so he confronted and killed him.
- TP4, Major Setback: Russ is given bail, since no jury would convict a protective father.
- The Final Push: CSI discovers that the naked photos were taken on a boat, which belongs to Russ.
- TP5, Climax: CSI discovers that it was Russ who was abusing his daughter, based on fluids found in his sleeping bag, and that he later killed Mike, who had tried to help Jodie.
- Aftermath: Russ receives a mandatory life sentence.

1 Introduction

Automatic summarization has enjoyed renewed interest in recent years thanks to the popularity of modern neural network-based approaches (Cheng and Lapata, 2016; Nallapati et al., 2016, 2017; Zheng and Lapata, 2019) and the availability of large-scale datasets containing hundreds of thousands of document–summary pairs (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018; Narayan et al., 2018; Fabbri et al., 2019; Liu and Lapata, 2019). Most efforts to date have concentrated on the summarization of news articles, which tend to be relatively short and formulaic, following an "inverted pyramid" structure which places the most essential and interesting elements of a story in the beginning and supporting material and secondary details afterwards. The rigid structure of news articles is expedient since important passages can be identified in predictable locations (e.g., by performing a "smart selection" of sentences from the beginning of the document) and the structure itself can be explicitly taken into account in model design (e.g., by encoding the relative and absolute position of each sentence).

In this paper we are interested in summarizing longer narratives, i.e., screenplays, whose form and structure is far removed from newspaper articles. Screenplays are typically between 110 and 120 pages long (20k words); their content is broken down into scenes, which contain mostly dialogue (lines the actors speak) as well as descriptions explaining what the camera sees. Moreover, screenplays are characterized by an underlying narrative structure, a sequence of events by which a story is defined (Cutting, 2016), and by the story's characters and their roles (Propp, 1968). Contrary to news articles, the gist of the story in a screenplay is not disclosed at the start; information is often revealed piecemeal; characters evolve and their actions might seem more or less important over the course of the narrative. From a modeling perspective, obtaining training data is particularly problematic: even if one could assemble screenplays and corresponding summaries (e.g., by mining IMDb or Wikipedia), the size of such a corpus would be at best in the range of a few hundred examples, not hundreds of thousands. Also note that genre differences might render transfer learning (Pan and Yang, 2010) difficult; e.g., a model trained on movie screenplays might not generalize to sitcoms or soap operas.

Given the above challenges, we introduce a number of assumptions to make the task feasible. Firstly, our goal is to produce informative summaries, which serve as a surrogate to reading the full screenplay or watching the entire film. Secondly, we follow Gorinski and Lapata (2015) in conceptualizing screenplay summarization as the task of identifying a sequence of informative scenes. Thirdly, we focus on summarizing television programs such as CSI: Crime Scene Investigation (Frermann et al., 2018), which revolves around a team of forensic investigators solving criminal cases. Such programs have a complex but well-defined structure: they open with a crime, the crime scene is examined, the victim is identified, suspects are introduced, forensic clues are gathered, suspects are investigated, and finally the case is solved.

In this work, we adapt general-purpose extractive summarization algorithms (Nallapati et al., 2017; Zheng and Lapata, 2019) to identify informative scenes in screenplays and instill in them knowledge about narrative film structure (Hauge, 2017; Cutting, 2016; Freytag, 1896). Specifically, we adopt a scheme commonly used by screenwriters as a practical guide for producing successful screenplays. According to this scheme, well-structured stories consist of six basic stages which are defined by five turning points (TPs), i.e., events which change the direction of the narrative and determine the story's progression and basic thematic units. In Figure 1, TPs are highlighted for a CSI episode. Although the link between turning points and summarization has not been previously made, earlier work has emphasized the importance of narrative structure for summarizing books (Mihalcea and Ceylan, 2007) and social media content (Kim and Monroy-Hernandez, 2015). More recently, Papalampidi et al. (2019) have shown how to identify turning points in feature-length screenplays by projecting synopsis-level annotations.

Crucially, our method does not involve manually annotating turning points in CSI episodes. Instead, we approximate narrative structure automatically by pretraining on the annotations of the TRIPOD dataset of Papalampidi et al. (2019) and employing a variant of their model. We find that narrative structure representations learned on their dataset (which was created for feature-length films) transfer well across cinematic genres and computational tasks. We propose a framework for end-to-end training in which narrative structure is treated as a latent variable for summarization. We extend the CSI dataset (Frermann et al., 2018) with binary labels indicating whether a scene should be included in the summary and present experiments with both supervised and unsupervised summarization models. An overview of our approach is shown in Figure 2.

Figure 2: Overview of our approach. We first identify scenes that act as turning points (i.e., key events that segment the story into sections; TP1: Introduction, TP2: Goal definition, TP3: Commitment, TP4: Setback, TP5: Ending). We next create a (video) summary by selecting informative scenes, i.e., scenes semantically related to turning points.
Our contributions can be summarized as follows: (a) we develop methods for instilling knowledge about narrative structure into generic supervised and unsupervised summarization algorithms; (b) we provide a new layer of annotations for the CSI corpus, which can be used for research in long-form summarization; and (c) we demonstrate that narrative structure can facilitate screenplay summarization; our analysis shows that key events identified in the latent space correlate with important summary content.

2 Related Work

A large body of previous work has focused on the computational analysis of narratives (Mani, 2012; Richards et al., 2009). Attempts to analyze how stories are written have been based on sequences of events (Schank and Abelson, 1975; Chambers and Jurafsky, 2009), plot units (McIntyre and Lapata, 2010; Goyal et al., 2010; Finlayson, 2012) and their structure (Lehnert, 1981; Rumelhart, 1980), as well as on characters or personas in a narrative (Black and Wilensky, 1979; Propp, 1968; Bamman et al., 2013, 2014; Valls-Vargas et al., 2014) and their relationships (Elson et al., 2010; Agarwal et al., 2014; Srivastava et al., 2016).

As mentioned earlier, work on the summarization of narratives has had limited appeal, possibly due to the lack of annotated data for modeling and evaluation. Kazantseva and Szpakowicz (2010) summarize short stories based on importance criteria (e.g., whether a segment contains character or location information); they create summaries to help readers decide whether they are interested in reading the whole story, without revealing its plot. Mihalcea and Ceylan (2007) summarize books with an unsupervised graph-based approach operating over segments (i.e., topical units). Their algorithm first generates a summary for each segment and then an overall summary by collecting sentences from the individual segment summaries. Focusing on screenplays, Gorinski and Lapata (2015) generate a summary by extracting an optimal chain of scenes via a graph-based approach centered around the main characters. In a similar fashion, Tsoneva et al. (2007) create video summaries for TV series episodes; their algorithm ranks sub-scenes in terms of importance using features based on character graphs and textual cues available in the subtitles and movie scripts. Vicol et al. (2018) introduce the MovieGraphs dataset, which also uses character-centered graphs to describe the content of movie video clips.

Our work synthesizes various strands of research on narrative structure analysis (Cutting, 2016; Hauge, 2017), screenplay summarization (Gorinski and Lapata, 2015), and neural network modeling (Dong, 2018). We focus on extractive summarization and our goal is to identify an optimal sequence of key events in a narrative. We aim to create summaries which re-tell the plot of a story in a concise manner. Inspired by recent neural network-based approaches (Cheng and Lapata, 2016; Nallapati et al., 2017; Zhou et al., 2018; Zheng and Lapata, 2019), we develop supervised and unsupervised models for our summarization task based on neural representations of scenes and how these relate to the screenplay's narrative structure. Contrary to most previous work, which has focused on characters, we select summary scenes based on events and their importance in the story. Our definition of narrative structure closely follows Papalampidi et al. (2019). However, the model architectures we propose are general and could be adapted to different plot analysis schemes (Field, 2005; Vogler, 2007). To overcome the difficulties in evaluating summaries for longer narratives, we also release a corpus of screenplays with scenes labeled as important (summary-worthy). Our annotations augment an existing dataset based on CSI episodes (Frermann et al., 2018), which was originally developed for incremental natural language understanding.

3 Problem Formulation

Let D denote a screenplay consisting of a sequence of scenes D = {s_1, s_2, ..., s_n}. Our aim is to select a subset D' = {s_i, ..., s_k} consisting of the most informative scenes (where k < n). Note that this definition produces extractive summaries; we further assume that selected scenes are presented according to their order in the screenplay. We next discuss how summaries can be created using both unsupervised and supervised approaches, and then move on to explain how these are adapted to incorporate narrative structure.

1922 larity ei j. A node’s centrality (importance) is mea- We encode the screenplay with a BiLSTM net- 0 sured by computing its degree: work and obtain contextualized representations si for scenes s by concatenating the hidden layers of i −→ ←− centrality(si) = λ1 ∑ ei j + λ2 ∑ ei j (1) the forward hi and backward hi LSTM, respec- ji 0 −→ ←− 0 tively: si = [hi ; hi ]. The vector si therefore repre- sents the content of the ith scene. where λ1 +λ2 = 1. The modification introduced in Zheng and Lapata(2019) takes directed edges into We also estimate the salience of scene si by account, capturing the intuition that the centrality measuring its similarity with a global screenplay of any two nodes is influenced by their relative po- content representation d. The latter is the weighted sition. Also note that the edges of preceding and sum of all scene representations s1,s2,...,sn. We calculate the semantic similarity between s0 and d following scenes are differentially weighted by λ1 i by computing the element-wise dot product bi, co- and λ2. sine similarity c , and pairwise distance u between Although earlier implementations of TEXT- i i their respective vectors: RANK (Mihalcea and Tarau, 2004) compute node similarity based on symbolic representations such 0 0 si · d as tf*idf, we adopt a neural approach. Specifically, bi = s d ci = (2) i s0 kdk we obtain sentence representations based on a pre- i 0 trained encoder. In our experiments, we rely on si · d ui = 0 (3) the Universal Sentence Encoder (USE; Cer et al. max(ksik2 · kdk2) 2018), however, other embeddings are possible.1 v s We represent a scene by the mean of its sentence The salience i of scene i is the concatenation of the similarity metrics: vi = [bi;ci;ui]. The content representations and measure scene similarity ei j vector s0 and the salience vector v are concate- using cosine.2 As in the original TEXTRANK al- i i nated and fed to a single neuron that outputs the gorithm (Mihalcea and Tarau, 2004), scenes are 3 ranked based on their centrality and the M most probability of a scene belonging to the summary. central ones are selected to appear in the summary. 3.3 Narrative Structure 3.2 Supervised Screenplay Summarization We now explain how to inject knowledge about narrative structure into our summarization models. Most extractive models frame summarization as For both models, such knowledge is transferred a classification problem. Following a recent ap- via a network pre-trained on the TRIPOD4 dataset proach (SUMMARUNNER; Nallapati et al. 2017), introduced by Papalampidi et al.(2019). This we use a neural network-based encoder to build dataset contains 99 movies annotated with turning representations for scenes and apply a binary clas- points. TPs are key events in a narrative that define sifier over these to predict whether they should the progression of the plot and occur between con- be in the summary. For each scene si ∈ D, we secutive acts (thematic units). It is often assumed predict a label yi ∈ {0,1} (where 1 means that (Cutting, 2016) that there are six acts in a film si must be in the summary) and assign a score (Figure1), each delineated by a turning point (ar- p(yi|si,D,θ) quantifying si’s relevance to the sum- rows in the figure). Each of the five TPs has also a mary (θ denotes model parameters). We assem- well-defined function in the narrative: we present ble a summary by selecting M sentences with the each TP alongside with its definition as stated in top p(1|si,D,θ). 
3.2 Supervised Screenplay Summarization

Most extractive models frame summarization as a classification problem. Following a recent approach (SUMMARUNNER; Nallapati et al. 2017), we use a neural network-based encoder to build representations for scenes and apply a binary classifier over these to predict whether they should be in the summary. For each scene s_i ∈ D, we predict a label y_i ∈ {0, 1} (where 1 means that s_i must be in the summary) and assign a score p(y_i | s_i, D, θ) quantifying s_i's relevance to the summary (θ denotes model parameters). We assemble a summary by selecting M sentences with the top p(1 | s_i, D, θ).

We calculate sentence representations via the pre-trained USE encoder (Cer et al., 2018); a scene is represented as the weighted sum of the representations of its sentences, which we obtain from a BiLSTM equipped with an attention mechanism. Next, we compute richer scene representations by modeling the surrounding context of a given scene. We encode the screenplay with a BiLSTM network and obtain contextualized representations s'_i for scenes s_i by concatenating the hidden layers of the forward and backward LSTM: s'_i = [h→_i; h←_i]. The vector s'_i therefore represents the content of the ith scene.

We also estimate the salience of scene s_i by measuring its similarity with a global screenplay content representation d. The latter is the weighted sum of all scene representations s_1, s_2, ..., s_n. We calculate the semantic similarity between s'_i and d by computing the element-wise dot product b_i, cosine similarity c_i, and pairwise distance u_i between their respective vectors:

b_i = s'_i ⊙ d,    c_i = (s'_i · d) / (‖s'_i‖ ‖d‖)    (2)

u_i = (s'_i · d) / max(‖s'_i‖_2 · ‖d‖_2)    (3)

The salience v_i of scene s_i is the concatenation of the similarity metrics: v_i = [b_i; c_i; u_i]. The content vector s'_i and the salience vector v_i are concatenated and fed to a single neuron that outputs the probability of a scene belonging to the summary.3

3 Aside from salience and content, Nallapati et al. (2017) take into account novelty and position-related features. We ignore these as they are specific to news articles and denote the modified model as SUMMARUNNER*.
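As a rough sketch of this scorer, the snippet below builds the global vector d, computes the three similarity features, and feeds [s'_i; v_i] to a single output neuron. This is an illustration under stated assumptions, not the released implementation: we read b_i as the element-wise product with d, and use PyTorch's cosine_similarity and pairwise_distance as stand-ins for the metrics in Equations (2)–(3).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SalienceScorer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(dim, 1)          # attention weights for the global vector d
        self.out = nn.Linear(2 * dim + 2, 1)   # single neuron over [s'_i ; v_i]

    def forward(self, s):                      # s: (n_scenes, dim) contextualized scenes
        w = torch.softmax(self.attn(s), dim=0)
        d = (w * s).sum(dim=0)                 # weighted sum of all scene representations
        b = s * d                              # element-wise product with d (b_i)
        c = F.cosine_similarity(s, d.expand_as(s), dim=1)    # c_i
        u = F.pairwise_distance(s, d.expand_as(s))           # u_i
        v = torch.cat([b, c.unsqueeze(1), u.unsqueeze(1)], dim=1)   # salience v_i
        return torch.sigmoid(self.out(torch.cat([s, v], dim=1))).squeeze(1)

probs = SalienceScorer(dim=64)(torch.randn(40, 64))   # p(y_i = 1 | s_i, D, theta)
```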

3.3 Narrative Structure

We now explain how to inject knowledge about narrative structure into our summarization models. For both models, such knowledge is transferred via a network pre-trained on the TRIPOD4 dataset introduced by Papalampidi et al. (2019). This dataset contains 99 movies annotated with turning points. TPs are key events in a narrative that define the progression of the plot and occur between consecutive acts (thematic units). It is often assumed (Cutting, 2016) that there are six acts in a film (Figure 1), each delineated by a turning point. Each of the five TPs also has a well-defined function in the narrative: we present each TP alongside its definition, as stated in screenwriting theory (Hauge, 2017) and adopted by Papalampidi et al. (2019), in Table 1 (see Appendix A for a more detailed overview of narrative structure theory).

4 https://github.com/ppapalampidi/TRIPOD

Table 1: Turning points and their definitions as given in Papalampidi et al. (2019).

Turning Point | Definition
TP1: Opportunity | Introductory event that occurs after the presentation of the story setting.
TP2: Change of Plans | Event where the main goal of the story is defined.
TP3: Point of No Return | Event that pushes the main character(s) to fully commit to their goal.
TP4: Major Setback | Event where everything falls apart (temporarily or permanently).
TP5: Climax | Final event of the main story, moment of resolution.

Papalampidi et al. (2019) identify scenes in movies that correspond to these key events as a means for analyzing the narrative structure of movies. They collect sentence-level TP annotations for plot synopses and subsequently project them via distant supervision onto screenplays, thereby creating silver-standard labels. We utilize this silver-standard dataset in order to pre-train a network which performs TP identification.

TP Identification Network  We first encode screenplay scenes via a BiLSTM equipped with an attention mechanism. We then contextualize them with respect to the whole screenplay via a second BiLSTM. Next, we compute topic-aware scene representations t_i via a context interaction layer (CIL) as proposed in Papalampidi et al. (2019). CIL is inspired by traditional segmentation approaches (Hearst, 1997) and measures the semantic similarity of the current scene with a preceding and following context window in the screenplay. Hence, the topic-aware scene representations also encode the degree to which each scene acts as a topic boundary in the screenplay.

In the final layer, we employ TP-specific attention mechanisms to compute the probability p_{ij} that scene t_i represents the jth TP in the screenplay. Note that we expect the TP-specific attention distributions to be sparse, as there are only a few scenes which are relevant for a TP (recall that TPs are boundary scenes between sections). To encourage sparsity, we add a low temperature value τ (Hinton et al., 2015) to the softmax part of the attention mechanisms:

g_{ij} = tanh(W_j t_i + b_j),    g_{ij} ∈ [−1, 1]    (4)

p_{ij} = exp(g_{ij}/τ) / Σ_{t=1}^{N} exp(g_{tj}/τ),    Σ_{i=1}^{N} p_{ij} = 1    (5)

where W_j, b_j represent the trainable weights of the attention layer of the jth TP and N is the number of scenes in the screenplay.
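The snippet below sketches these TP-specific attention heads with the low-temperature softmax. It is a schematic re-implementation, with τ = 0.01 taken from the implementation details in Section 4.

```python
import torch
import torch.nn as nn

class TPAttention(nn.Module):
    # One attention head per turning point (Equations (4)-(5)); a low softmax
    # temperature tau pushes each distribution over scenes towards one-hot.
    def __init__(self, dim, n_tps=5, tau=0.01):
        super().__init__()
        self.heads = nn.ModuleList([nn.Linear(dim, 1) for _ in range(n_tps)])
        self.tau = tau

    def forward(self, t):                           # t: (N, dim) topic-aware scenes
        cols = []
        for head in self.heads:
            g = torch.tanh(head(t)).squeeze(1)      # g_ij in [-1, 1]
            cols.append(torch.softmax(g / self.tau, dim=0))
        return torch.stack(cols, dim=1)             # p: (N, 5), each column sums to 1

p = TPAttention(dim=64)(torch.randn(30, 64))
tp_scores = p.max(dim=1).values    # f_i = max_j p_ij, used by unsupervised SUMMER below
```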
Unsupervised SUMMER  We now introduce our model, SUMMER (short for Screenplay Summarization with Narrative Structure).5 We first present an unsupervised variant which modifies the computation of scene centrality in the directed version of TEXTRANK (Equation (1)). Specifically, we use the pre-trained network described in Section 3.3 to obtain TP-specific attention distributions. We then select an overall score f_i for each scene (denoting how likely it is to act as a TP). We set f_i = max_{j∈[1,5]} p_{ij}, i.e., to the p_{ij} value that is highest across TPs. We incorporate these scores into centrality as follows:

centrality(s_i) = λ1 Σ_{j<i} (e_{ij} + f_j) + λ2 Σ_{j>i} (e_{ij} + f_i)    (6)

Intuitively, we add the f_j term in the forward sum in order to incrementally increase the centrality scores of scenes as the story moves on and we encounter more TP events (i.e., we move to later sections in the narrative). At the same time, we add the f_i term in the backward sum in order to also increase the scores of scenes identified as TPs.

5 We make our code publicly available at https://github.com/ppapalampidi/SUMMER.
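Concretely, the centrality update of Equation (6) can be sketched as follows (illustrative code; e is the cosine similarity matrix from Equation (1) and f holds the TP scores f_i = max_j p_ij):

```python
import numpy as np

def summer_centrality(e, f, lambda1=0.7, lambda2=0.3):
    # e: (n, n) scene similarity matrix; f: (n,) turning-point scores per scene.
    n = len(f)
    cent = np.zeros(n)
    for i in range(n):
        # f_j in the forward sum: scenes later in the story accumulate the scores
        # of all TP-like scenes that precede them.
        cent[i] = lambda1 * (e[i, :i] + f[:i]).sum()
        # f_i in the backward sum: scenes that themselves look like TPs get a boost.
        cent[i] += lambda2 * (e[i, i + 1:] + f[i]).sum()
    return cent
```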

Supervised SUMMER  We also propose a supervised variant of SUMMER following the basic model formulation in Section 3.2. We still represent a scene as the concatenation of a content vector s'_i and a salience vector v'_i, which serve as input to a binary classifier. However, we now modify how salience is determined; instead of computing a general global content representation d for the screenplay, we identify a sequence of TPs and measure the semantic similarity of each scene with this sequence. Our model is depicted in Figure 3.

We utilize the pre-trained TP network (Figures 3(a) and (b)) to compute sparse attention scores over scenes. In the supervised setting, where gold-standard binary labels provide a training signal, we fine-tune the network in an end-to-end fashion on summarization (Figure 3(c)). We compute the TP representations via the attention scores; we calculate a vector tp_j as the weighted sum of all topic-aware scene representations t_i produced via CIL: tp_j = Σ_{i∈[1,N]} p_{ij} t_i, where N is the number of scenes in a screenplay. In practice, only a few scenes contribute to tp_j due to the τ parameter in the softmax function (Equation (5)).
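The latent TP representations and the TP-based salience features can be sketched as follows (again illustrative, with a stand-in attention matrix; cf. Equations (2)–(3) for the similarity metrics):

```python
import torch

p = torch.softmax(torch.randn(30, 5) / 0.01, dim=0)   # sparse TP attention (stand-in)
t = torch.randn(30, 64)                               # topic-aware scene vectors
tp = p.t() @ t                                        # tp_j = sum_i p_ij * t_i, (5, 64)

# Similarity of every scene to every latent TP, max-pooled over the five TPs to
# obtain a single TP-based salience representation per scene.
sim = torch.stack(
    [torch.cosine_similarity(t, tp_j.expand_as(t), dim=1) for tp_j in tp], dim=1)
salience = sim.max(dim=1).values                      # one value per scene
```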

A TP-scene interaction layer measures the semantic similarity between scenes t_i and latent TP representations tp_j (Figure 3(c)). Intuitively, a complete summary should contain scenes which are related to at least one of the key events in the screenplay. We calculate the semantic similarity v_{ij} of scene t_i with TP tp_j as in Equations (2) and (3). We then perform max pooling over vectors v_{i1}, ..., v_{iT}, where T is the number of TPs (i.e., five), and calculate a final similarity vector v'_i for the ith scene.

Figure 3: Overview of SUMMER: (a) scene encoding, (b) narrative structure prediction, and (c) summary scene prediction. We use one TP-specific attention mechanism per turning point in order to acquire TP-specific distributions over scenes. We then compute the similarity between TPs and contextualized scene representations. Finally, we perform max pooling over TP-specific similarity vectors and concatenate the final similarity representation with the contextualized scene representation.

The model is trained end-to-end on the summarization task using BCE, the binary cross-entropy loss function. We add an extra regularization term to this objective to encourage the TP-specific attention distributions to be orthogonal (since we want each attention layer to attend to different parts of the screenplay). We thus maximize the Kullback-Leibler (KL) divergence D_KL between all pairs of TP attention distributions tp_i, i ∈ [1,5]:

O = Σ_{i∈[1,5]} Σ_{j∈[1,5], j≠i} log( 1 / (D_KL(tp_i ‖ tp_j) + ε) )    (7)

Furthermore, we know from screenwriting theory (Hauge, 2017) that there are rules of thumb as to when a TP should occur (e.g., the Opportunity occurs after the first 10% of a screenplay, Change of Plans is approximately 25% in). It is reasonable to discourage the tp distributions from deviating drastically from these expected positions. Focal regularization F minimizes the KL divergence D_KL between each TP attention distribution tp_i and its expected position distribution th_i:

F = Σ_{i∈[1,5]} D_KL(tp_i ‖ th_i)    (8)

The final loss L is the weighted sum of all three components, where a, b are fixed during training: L = BCE + aO + bF.
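A compact sketch of this combined objective follows; it is a direct transcription of Equations (7)–(8) under stated assumptions, with the weights a = 0.15 and b = 0.1 reported in Appendix C.

```python
import torch
import torch.nn.functional as F

def kl(p, q, eps=1e-8):
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    return (p * (p / q).log()).sum()

def summer_loss(pred, gold, tp_attn, expected, a=0.15, b=0.1, eps=1e-6):
    # pred/gold: (N,) summary probabilities and binary float labels;
    # tp_attn/expected: (N, 5) attention and expected-position distributions.
    bce = F.binary_cross_entropy(pred, gold)
    orth = sum(torch.log(1.0 / (kl(tp_attn[:, i], tp_attn[:, j]) + eps))
               for i in range(5) for j in range(5) if i != j)        # Equation (7)
    focal = sum(kl(tp_attn[:, i], expected[:, i]) for i in range(5)) # Equation (8)
    return bce + a * orth + b * focal                                # L = BCE + aO + bF
```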

4 Experimental Setup

Crime Scene Investigation Dataset  We performed experiments on an extension of the CSI dataset6 introduced by Frermann et al. (2018). It consists of 39 CSI episodes, each annotated with word-level labels denoting whether the perpetrator is mentioned in the utterances characters speak. We further collected scene-level binary labels indicating whether episode scenes are important and should be included in a summary. Three human judges performed the annotation task after watching the CSI episodes scene-by-scene. To facilitate the annotation, judges were asked to indicate why they thought a scene was important, citing the following reasons: it revealed (i) the victim, (ii) the cause of death, (iii) an autopsy report, (iv) crucial evidence, (v) the perpetrator, and (vi) the motive or the relation between perpetrator and victim. Annotators were free to select more than one or none of the listed reasons where appropriate. We can think of these reasons as high-level aspects a good summary should cover (for CSI and related crime series). Annotators were not given any information about TPs or narrative structure; the annotation was not guided by theoretical considerations, rather our aim was to produce useful CSI summaries. Table 2 presents the dataset statistics (see also Appendix B for more detail).

6 https://github.com/EdinburghNLP/csi-corpus

Table 2: CSI dataset statistics; means and (std).

overall: episodes 39; scenes 1544; summary scenes 454
per episode: scenes 39.58 (6.52); crime-specific aspects 5.62 (0.24); summary scenes 11.64 (2.98); summary scenes (%) 29.75 (7.35); sentences 822.56 (936.23); tokens 13.27k (14.67k)
per episode scene: sentences 20.78 (35.61); tokens 335.19 (547.61); tokens per sentence 16.13 (16.32)

Implementation Details  In order to set the hyperparameters of all proposed networks, we used a small development set of four episodes from the CSI dataset (see Appendix B for details). After experimentation, we set the temperature τ of the softmax layers for the TP-specific attentions (Equation (5)) to 0.01. Since the binary labels in the supervised setting are imbalanced, we apply class weights to the binary cross-entropy loss of the respective models. We weight each class by its inverse frequency in the training set. Finally, in supervised SUMMER, where we also identify the narrative structure of the screenplays, we consider as key events per TP the scenes that correspond to an attention score higher than 0.05. More implementation details can be found in Appendix C.

As shown in Table 2, the gold-standard summaries in our dataset have a compression rate of approximately 30%. During inference, we select the top M scenes as the summary, such that they correspond to 30% of the length of the episode.

5 Results and Analysis

Is Narrative Structure Helpful?  We perform 10-fold cross-validation and evaluate model performance in terms of F1 score. Table 3 summarizes the results of unsupervised models. We present the following baselines: Lead 30% selects the first 30% of an episode as the summary, Last 30% selects the last 30%, and Mixed 30% randomly selects 15% of the summary from the first 30% of an episode and 15% from the last 30%. We also compare SUMMER against TEXTRANK based on tf*idf (Mihalcea and Tarau, 2004), the directed neural variant described in Section 3.1 without any TP information, a variant where TPs are approximated by their expected position as postulated in screenwriting theory, and a variant that incorporates information about characters (Gorinski and Lapata, 2015) instead of narrative structure. For the character-based TEXTRANK, called SCENESUM, we substitute the f_i, f_j scores in Equation (6) with character-related importance scores c_i similar to the definition in Gorinski and Lapata (2015):

c_i = Σ_{c∈C} [c ∈ S ∩ main(C)] / Σ_{c∈C} [c ∈ S]    (9)

where S is the set of all characters participating in scene s_i, C is the set of all characters participating in the screenplay, and main(C) are all the main characters of the screenplay. We retrieve the set of main characters from the IMDb page of the respective episode. We also note that human agreement for our task is 79.26 F1 score, as measured on a small subset of the corpus.

Table 3: Unsupervised screenplay summarization.

Model | F1
Lead 30% | 30.66
Last 30% | 39.85
Mixed 30% | 34.32
TEXTRANK, undirected, tf*idf | 32.11
TEXTRANK, directed, neural | 41.75
TEXTRANK, directed, expected TP positions | 41.05
SCENESUM, directed, character-based weights | 42.02
SUMMER | 44.70

As shown in Table 3, SUMMER achieves the best performance (44.70 F1 score) among all models and is superior to an equivalent model which uses expected TP positions or a character-based representation. This indicates that the pre-trained network provides better predictions for key events than position and character heuristics, even though there is a domain shift from Hollywood movies in the TRIPOD corpus to episodes of a crime series in the CSI corpus. Moreover, we find that the directed versions of TEXTRANK are better at identifying important scenes than the undirected version. We found that performance peaks with λ1 = 0.7 (see Equation (6)), indicating that higher importance is given to scenes as the story progresses (see Appendix D for experiments with different λ values).
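For illustration, the character-importance weighting of Equation (9) reduces to the fraction of a scene's participating characters that are main characters. A minimal sketch (the character names are hypothetical):

```python
def character_importance(scene_chars, main_chars):
    # c_i (Equation (9)): fraction of the scene's characters that are mains;
    # used in place of the TP scores f_i, f_j for the SCENESUM baseline.
    if not scene_chars:
        return 0.0
    return len(scene_chars & main_chars) / len(scene_chars)

# Hypothetical example: two of the three characters in the scene are mains.
print(character_importance({"Grissom", "Willows", "Russ"}, {"Grissom", "Willows"}))
```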
In Table 4, we report results for supervised models. Aside from the various baselines in the first block of the table, we compare the neural extractive model SUMMARUNNER*7 (Nallapati et al., 2017) presented in Section 3.2 with several variants of our model SUMMER. We experimented with randomly initializing the network for TP identification (−P) and with using a pre-trained network (+P). We also experimented with removing the regularization terms O and F (Equations (7) and (8)) from the loss (−R). We assess the performance of SUMMER when we follow a two-step approach where we first predict TPs via the pre-trained network and then train a network on screenplay summarization based on fixed TP representations (fixed one-hot TPs), or alternatively use expected TP position distributions as postulated in screenwriting theory (fixed distributions). Finally, we incorporate character-based information into our baseline and create a supervised version of SCENESUM. We now utilize the character importance scores per scene (Equation (9)) as attention scores – instead of using a trainable attention mechanism – when computing the global screenplay representation d (Section 3.2).

Table 4: Supervised screenplay summarization; for SUMMER variants, we also report the percentage of aspect labels covered by latent TP predictions and the average number of scenes inferred per TP.

Model | F1 | Coverage of aspects | # scenes per TP
Lead 30% | 30.66 | – | –
Last 30% | 39.85 | – | –
Mixed 30% | 34.32 | – | –
SUMMARUNNER* | 48.56 | – | –
SCENESUM | 47.71 | – | –
SUMMER, fixed one-hot TPs | 46.92 | 63.11 | 1.00
SUMMER, fixed distributions | 47.64 | 67.01 | 1.05
SUMMER, −P, −R | 51.93 | 44.48 | 1.19
SUMMER, −P, +R | 49.98 | 51.96 | 1.14
SUMMER, +P, −R | 50.56 | 62.35 | 3.07
SUMMER, +P, +R | 52.00 | 70.25 | 1.20

Table 4 shows that all end-to-end SUMMER variants outperform SUMMARUNNER*. The best result (52.00 F1 score) is achieved by pre-trained SUMMER with regularization, outperforming SUMMARUNNER* by an absolute difference of 3.44. The randomly initialized version with no regularization achieves similar performance (51.93 F1 score). For summarizing screenplays, explicitly encoding narrative structure seems to be more beneficial than general representations of scene importance. Finally, two-step versions of SUMMER perform poorly, which indicates that end-to-end training and fine-tuning of the TP identification network on the target dataset is crucial.

What Does the Model Learn?  Apart from performance on summarization, we would also like to examine the quality of the TPs inferred by SUMMER (supervised variant). Problematically, we do not have any gold-standard TP annotation in the CSI corpus. Nevertheless, we can implicitly assess whether they are meaningful by measuring how well they correlate with the reasons annotators cite to justify their decision to include a scene in the summary (e.g., because it reveals the cause of death or provides important evidence). Specifically, we compute the extent to which these aspects overlap with the TPs predicted by SUMMER as:

C = Σ_{A_i∈A} Σ_{TP_j∈TP} [dist(TP_j, A_i) ≤ 1] / |A|    (10)

where A is the set of all aspect scenes, |A| is the number of aspects, TP is the set of scenes inferred as TPs by the model, A_i and TP_j are the subsets of scenes corresponding to the ith aspect and jth TP, respectively, and dist(TP_j, A_i) is the minimum distance between TP_j and A_i in number of scenes.

The proportion of aspects covered is given in Table 4, middle column. We find that coverage is relatively low (44.48%) for the randomly initialized SUMMER with no regularization. There is a slight improvement of 7.48% when we force the TP-specific attention distributions to be orthogonal and close to expected positions. Pre-training and regularization provide a significant boost, increasing coverage to 70.25%, while pre-trained SUMMER without regularization infers on average more scenes representative of each TP. This shows that the orthogonal constraint also encourages sparse attention distributions for TPs.

Table 5 shows the degree of association between individual TPs and summary aspects (see Appendix D for illustrated examples). We observe that Opportunity and Change of Plans are mostly associated with information about the crime scene and the victim, Climax is focused on the revelation of the motive, while information relating to cause of death, perpetrator, and evidence is captured by both Point of no Return and Major Setback. Overall, the generic Hollywood-inspired TP labels are adjusted to our genre and describe crime-related key events, even though no aspect labels were provided to our model during training.

Do Humans Like the Summaries?  We also conducted a human evaluation experiment using the summaries created for 10 CSI episodes.8 We produced summaries based on the gold-standard annotations (Gold), SUMMARUNNER*, and the supervised version of SUMMER. Since 30% of an episode results in lengthy summaries (15 minutes on average), we further increased the compression rate for this experiment by limiting each summary to six scenes. For the gold standard condition, we randomly selected exactly one scene per aspect.

7 Our adaptation of SUMMARUNNER that considers content and salience vectors for scene selection.
8 https://github.com/ppapalampidi/SUMMER/tree/master/video_summaries

Table 5: Percentage of aspect labels covered per TP for SUMMER, +P, +R.

Turning Point | Crime scene | Victim | Death Cause | Perpetrator | Evidence | Motive
Opportunity | 56.76 | 52.63 | 15.63 | 15.38 | 2.56 | 0.00
Change of Plans | 27.03 | 42.11 | 21.88 | 15.38 | 5.13 | 0.00
Point of no Return | 8.11 | 13.16 | 9.38 | 25.64 | 48.72 | 5.88
Major Setback | 0.00 | 0.00 | 6.25 | 10.25 | 48.72 | 35.29
Climax | 2.70 | 0.00 | 6.25 | 2.56 | 23.08 | 55.88

For SUMMARUNNER* and SUMMER we selected the top six predicted scenes based on their posterior probabilities. We then created video summaries by isolating and merging the selected scenes in the raw video.

We asked Amazon Mechanical Turk (AMT) workers to watch the video summaries for all systems and rank them from most to least informative. They were also presented with six questions relating to the aspects the summary was supposed to cover (e.g., Was the victim revealed in the summary? Do you know who the perpetrator was?). They could answer Yes, No, or Unsure. Five workers evaluated each summary.

Table 6: Human evaluation: percentage of Yes answers by AMT workers regarding each aspect in a summary. All differences in (average) Rank are significant (p < 0.05, using a χ2 test).

System | Crime scene | Victim | Death Cause | Perpetrator | Evidence | Motive | Overall | Rank
SUMMARUNNER* | 85.71 | 93.88 | 75.51 | 81.63 | 59.18 | 38.78 | 72.45 | 2.18
SUMMER | 89.80 | 87.76 | 83.67 | 81.63 | 77.55 | 57.14 | 79.59 | 2.00
Gold | 89.80 | 91.84 | 71.43 | 83.67 | 65.31 | 57.14 | 76.53 | 1.82

Table 6 shows the proportion of times participants responded Yes for each aspect across the three systems. Although SUMMER does not improve over SUMMARUNNER* in identifying basic information (i.e., about the victim and perpetrator), it creates better summaries overall with more diverse content (i.e., it more frequently includes information about cause of death, evidence, and motive). This observation validates our assumption that identifying scenes that are semantically close to the key events of a screenplay leads to more complete and detailed summaries. Finally, Table 6 also lists the average rank per system (lower is better), which shows that crowdworkers like gold summaries best, SUMMER is often ranked second, followed by SUMMARUNNER* in third place.

6 Conclusions
In this paper we argued that the underlying structure of narratives is beneficial for long-form summarization. We adapted a scheme for identifying narrative structure (i.e., turning points) in Hollywood movies and showed how this information can be integrated with supervised and unsupervised extractive summarization algorithms. Experiments on the CSI corpus showed that this scheme transfers well to a different genre (crime investigation) and that utilizing narrative structure boosts summarization performance, leading to more complete and diverse summaries. Analysis of model output further revealed that latent events encapsulated by turning points correlate with important aspects of a CSI summary.

Although currently our approach relies solely on textual information, it would be interesting to incorporate additional modalities such as video or audio. Audiovisual information could facilitate the identification of key events and scenes. Besides narrative structure, we would also like to examine the role of emotional arcs (Vonnegut, 1981; Reagan et al., 2016) in a screenplay. An often integral part of a compelling story is the emotional experience that is evoked in the reader or viewer (e.g., somebody gets into trouble and then out of it, somebody finds something wonderful, loses it, and then finds it again). Understanding emotional arcs may be useful for revealing a story's shape, highlighting important scenes, and tracking how the story develops for different characters over time.

Acknowledgments

We thank the anonymous reviewers for their feedback. We gratefully acknowledge the support of the European Research Council (Lapata; award 681760, "Translating Multiple Modalities into Text") and of the Leverhulme Trust (Keller; award IAF-2017-019).

References

Apoorv Agarwal, Sriramkumar Balasubramanian, Jiehan Zheng, and Sarthak Dash. 2014. Parsing screenplays for extracting social networks from movies. In Proceedings of the 3rd Workshop on Computational Linguistics for Literature, pages 50–58, Gothenburg, Sweden.
David Bamman, Brendan O'Connor, and Noah A. Smith. 2013. Learning latent personas of film characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 352–361, Sofia, Bulgaria.

David Bamman, Ted Underwood, and Noah A. Smith. 2014. A Bayesian mixed effects model of literary character. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 370–379, Baltimore, Maryland.

John B Black and Robert Wilensky. 1979. An evaluation of story grammars. Cognitive Science, 3(3):213–229.

Charles Oscar Brink. 2011. Horace on Poetry: The 'Ars Poetica'. Cambridge University Press.

Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.

Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 602–610, Suntec, Singapore.

Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494.

James E Cutting. 2016. Narrative theory and the dynamics of popular movies. Psychonomic Bulletin & Review, 23(6):1713–1743.

Yue Dong. 2018. A survey on neural network-based summarization methods. ArXiv, abs/1804.04589.

David K. Elson, Nicholas Dames, and Kathleen R. McKeown. 2010. Extracting social networks from literary fiction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 138–147, Uppsala, Sweden.

Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074–1084, Florence, Italy. Association for Computational Linguistics.

Syd Field. 2005. Screenplay: The Foundations of Screenwriting. Dell Publishing Company.

Mark Alan Finlayson. 2012. Learning Narrative Structure from Annotated Folktales. Ph.D. thesis, Massachusetts Institute of Technology.

Lea Frermann, Shay B Cohen, and Mirella Lapata. 2018. Whodunnit? Crime drama as a case for natural language understanding. Transactions of the Association of Computational Linguistics, 6:1–15.

Gustav Freytag. 1896. Freytag's Technique of the Drama: An Exposition of Dramatic Composition and Art. Scholarly Press.

Philip John Gorinski and Mirella Lapata. 2015. Movie script summarization as graph-based scene extraction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1066–1076, Denver, Colorado. Association for Computational Linguistics.

Amit Goyal, Ellen Riloff, and Hal Daumé III. 2010. Automatically producing plot unit representations for narrative text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 77–86, Cambridge, MA.

Max Grusky, Mor Naaman, and Yoav Artzi. 2018. NEWSROOM: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 708–719, New Orleans, USA.

Michael Hauge. 2017. Storytelling Made Easy: Persuade and Transform Your Audiences, Buyers, and Clients – Simply, Quickly, and Profitably. Indie Books International.

Marti A Hearst. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33–64.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1693–1701. Morgan Kaufmann.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.

Anna Kazantseva and Stan Szpakowicz. 2010. Summarizing short stories. Computational Linguistics, 36(1):71–109.

Joy Kim and Andrés Monroy-Hernández. 2015. Storia: Summarizing social media content based on narrative theory using crowdsourcing. CoRR, abs/1509.03026.

Don’t give me the details, just the summary! topic-aware convolutional neural networks for ex- Josep Valls-Vargas, J. Zhu, and Santiago Ontanon. treme summarization. In Proceedings of the 2018 2014. Toward automatic role identification in unan- Conference on Empirical Methods in Natural Lan- notated folk tales. In Proceedings of the 10th AAAI guage Processing, pages 1797–1807, Brussels, Bel- Conference on Artificial Intelligence and Interactive gium. Association for Computational Linguistics. Digital Entertainment, pages 188–194. Sinno Jialin Pan and Qiang Yang. 2010. A survey on Paul Vicol, Makarand Tapaswi, Lluis Castrejon, and transfer learning. IEEE Transactions on Knowledge Sanja Fidler. 2018. Moviegraphs: Towards under- and Data Engineering, 22(10):1345–1359. standing human-centric situations from videos. In Proceedings of the IEEE Conference on Computer Pinelopi Papalampidi, Frank Keller, and Mirella La- Vision and Pattern Recognition, pages 8581–8590. pata. 2019. Movie plot analysis via turning point identification. In Proceedings of the 2019 Confer- Christopher Vogler. 2007. ’s Journey: Mythic ence on Empirical Methods in Natural Language Structure for Writers. Michael Wiese Productions. Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- Kurt Vonnegut. 1981. Palm Sunday. RosettaBooks IJCNLP), pages 1707–1717. LLC, New York.

Hao Zheng and Mirella Lapata. 2019. Sentence centrality revisited for unsupervised summarization. arXiv preprint arXiv:1906.03508.

Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. arXiv preprint arXiv:1807.02305.

A Narrative Structure Theory

The initial formulation of narrative structure was promoted by Aristotle, who defined the basic triangle-shaped plot structure, which has a beginning, middle, and end (Pavis, 1998). However, later theories argued that the structure of a play should be more complex (Brink, 2011), and hence other schemes (Freytag, 1896) were proposed with fine-grained stages and events defining the progression of the plot. These events are considered the precursor of turning points, defined by Thompson (1999) and used in modern variations of screenplay theory. Turning points are narrative moments from which the plot goes in a different direction. By definition these occur at the junctions of acts.

Currently, there are myriad schemes describing the narrative structure of films, which are often used as a practical guide for screenwriting (Cutting, 2016). One variation of these modern schemes is adopted by Papalampidi et al. (2019), who focus on the definition of turning points and demonstrate that such events indeed exist in films and can be automatically identified. According to the adopted scheme (Hauge, 2017), there are six stages (acts) in a film, namely the setup, the new situation, progress, complications and higher stakes, the final push, and the aftermath, separated by the five turning points presented in Table 1.

B CSI Corpus

As described in Section 4, we collected aspect-based summary labels for all episodes in the CSI corpus. In Figure 4 we illustrate the average composition of a summary based on the different aspects seen in a crime investigation (e.g., crime scene, victim, cause of death, perpetrator, evidence). Most of these aspects are covered in 10–15% of a summary, which corresponds to approximately two scenes in the episode. Only the "Evidence" aspect occupies a larger proportion of the summary (36.1%), corresponding to five scenes. However, there exist scenes which cover multiple aspects (and as a result are annotated with more than one label) and episodes that do not include any scenes related to a specific aspect (e.g., if the murder was a suicide, there is no perpetrator).

Figure 4: Average composition of a CSI summary based on different crime-related aspects: Evidence 36.1%, Victim 14.6%, Cause of death 14.4%, Crime scene 12.4%, Perpetrator 11.6%, Motive 10.9%.

We should note that Frermann et al. (2018) discriminate between different cases presented in the same episode in the original CSI dataset. Specifically, there are episodes in the dataset where, except for the primary crime investigation case, a second one is presented, occupying a significantly smaller part of the episode. Although in the original dataset there are annotations available indicating which scenes refer to each case, we assume no such knowledge, treating the screenplay as a single unit — most TV series and movies contain sub-stories. We also hypothesize that the latent identified TP events in SUMMER should relate to the primary case.


C Implementation Details the preceding ones, since λ1 and λ2 are bounded (λ1 + λ2 = 1). In all unsupervised versions of TEXTRANK and We observe that performance increases when SUMMER we used a threshold h equal to 0.2 for re- higher importance is attributed to screenplay moving weak edges from the corresponding fully scenes as the story moves on (λ > 0.5), whereas connected screenplay graphs. For the supervised 1 for extreme cases (λ1 → 1), where only the later version of SUMMER, where we use additional reg- parts of the story are considered, performance ularization terms in the loss function, we experi- drops. Overall, the same peak appears for both mentally set the weights a and b for the different TEXTRANK and SUMMER when λ ∈ [0.6,0.7], terms to 0.15 and 0.1, respectively. 1 which means that slightly higher importance is at- We used the Adam algorithm (Kingma and Ba, tributed to the screenplay scenes that follow. In- 2014) for optimizing our networks. After experi- tuitively, initial scenes of an episode tend to have mentation, we chose an LSTM with 64 neurons for high similarity with all other scenes in the screen- encoding the scenes in the screenplay and another play, and on their own are not very informative identical one for contextualizing them. For the (e.g., the crime, victim, and suspects are intro- context interaction layer, the window l for comput- duced but the perpetrator is not yet known). As ing the surrounding context of a screenplay scene a result, the undirected version of TEXTRANK was set to 20% of the screenplay length as pro- tends to favor the first part of the story and the re- posed in Papalampidi et al.(2019). Finally, we sulting summary consists mainly of initial scenes. also added a dropout of 0.2. For developing our By adding extra importance to later scenes, we models we used PyTorch (Paszke et al., 2017). also encourage the selection of later events that D Additional Results might be surprising (and hence have lower simi- larity with other scenes) but more informative for We illustrate in Figure5 the performance (F1 the summary. Moreover, in SUMMER, where the score) of the directed neural TEXTRANK and weights change in a systematic manner based on SUMMER models in the unsupervised setting with narrative structure, we also observe that scenes ap- respect to different λ1 values. Higher λ1 values pearing later in the screenplay are selected more correspond to higher importance for the succeed- often for inclusion in the summary. ing scenes and respectively lower importance for As described in detail in Section 3.3, we also

1932 infer the narrative structure of CSI episodes in the supervised version of SUMMER via latent TP rep- resentations. During experimentation (see Sec- tion 5), we found that these TPs are highly corre- lated with different aspects of a CSI summary. In Figure6 we visualize examples of identified TPs on CSI episodes during test time alongside with gold-standard aspect-based summary annotations. Based on the examples, we empirically observe that different TPs tend to capture different types of information helpful for summarizing crime in- vestigation stories (e.g., crime scene, victim, per- petrator, motive).
