Learning to Select, Track, and Generate for Data-to-Text

Hayate Iso†∗  Yui Uehara‡  Tatsuya Ishigaki♮‡  Hiroshi Noji‡  Eiji Aramaki†‡
Ichiro Kobayashi♭‡  Yusuke Miyao♯‡  Naoaki Okazaki♮‡  Hiroya Takamura♮‡

†Nara Institute of Science and Technology  ‡Artificial Intelligence Research Center, AIST
♮Tokyo Institute of Technology  ♭Ochanomizu University  ♯The University of Tokyo

∗ Work was done during an internship at the Artificial Intelligence Research Center, AIST.

arXiv:1907.09699v1 [cs.CL] 23 Jul 2019

Abstract

We propose a data-to-text generation model with two modules, one for tracking and the other for text generation. Our tracking module selects and keeps track of salient information and memorizes which record has been mentioned. Our generation module generates a summary conditioned on the state of the tracking module. Our model can be viewed as simulating the human-like writing process, which gradually selects the information by determining intermediate variables while writing the summary. In addition, we also explore the effectiveness of writer information for generation. Experimental results show that our model outperforms existing models on all evaluation metrics even without writer information. Incorporating writer information further improves the performance, contributing to both content planning and surface realization.

1 Introduction

Advances in sensor and data storage technologies have rapidly increased the amount of data produced in various fields such as weather, finance, and sports. To address the information overload caused by this massive data, data-to-text generation technology, which expresses the contents of data in natural language, has become increasingly important (Barzilay and Lapata, 2005). Recently, neural methods have been able to generate high-quality short summaries, especially from small pieces of data (Liu et al., 2018).

Despite this success, it remains challenging to generate a high-quality long summary from data (Wiseman et al., 2017). One reason for the difficulty is that the input data is too large for a naive model to find its salient part, i.e., to determine which part of the data should be mentioned.

In addition, the salient part moves as the summary explains the data. For example, when generating a summary of a basketball game (Table 1(b)) from the box score (Table 1(a)), the input contains numerous data records about the game, e.g., Jordan Clarkson scored 18 points. Existing models often refer to the same data record multiple times (Puduppully et al., 2019). They may also mention an incorrect data record, e.g., Kawhi Leonard added 19 points, when the summary should mention LaMarcus Aldridge, who scored 19 points. Thus, we need a model that finds salient parts, tracks transitions of salient parts, and expresses information faithful to the input.

In this paper, we propose a novel data-to-text generation model with two modules, one for saliency tracking and another for text generation. The tracking module keeps track of saliency in the input data: when it detects a saliency transition, it selects a new data record¹ and updates the state of the tracking module. The text generation module generates a document conditioned on the current tracking state. Our model can be seen as imitating the human-like writing process, which gradually selects and tracks the data while generating the summary. In addition, we note some writer-specific patterns and characteristics: how data records are selected to be mentioned, and how data records are expressed as text, e.g., the order of data records and the word usage. We therefore also incorporate writer information into our model.

The experimental results demonstrate that, even without writer information, our model achieves the best performance among the previous models on all evaluation metrics: 94.38% precision of relation generation, 42.40% F1 score of content selection, 19.38% normalized Damerau-Levenshtein Distance (DLD) of content ordering, and 16.15% BLEU score. We also confirm that adding writer information further improves the performance.

¹ We use 'data record' and 'relation' interchangeably.
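As a rough illustration of this select-track-generate loop, the following sketch scores candidate records against a recurrent tracking state, folds the selected record into that state, and conditions each decoding step on the current tracking state. This is not the paper's exact architecture: the class names, dimensions, and the fixed transition schedule are assumptions made for illustration.

# Minimal sketch of the select-track-generate loop described above.
# Illustrative only: names, sizes, and the periodic transition below
# are assumptions, not the paper's design.
import torch
import torch.nn as nn


class SaliencyTracker(nn.Module):
    """Keeps a recurrent state over the currently salient data record."""

    def __init__(self, record_dim: int, hidden_dim: int):
        super().__init__()
        self.cell = nn.GRUCell(record_dim, hidden_dim)
        self.scorer = nn.Linear(record_dim + hidden_dim, 1)

    def select(self, records: torch.Tensor, state: torch.Tensor) -> int:
        # Score every candidate record against the current tracking state
        # and pick the most salient one (greedy argmax for simplicity).
        tiled = state.expand(records.size(0), -1)
        scores = self.scorer(torch.cat([records, tiled], dim=-1)).squeeze(-1)
        return int(scores.argmax())

    def update(self, record: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # Fold the newly selected record into the tracking state.
        return self.cell(record.unsqueeze(0), state)


class Generator(nn.Module):
    """Emits token logits conditioned on the current tracking state."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.cell = nn.GRUCell(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def step(self, tracking_state, dec_state):
        dec_state = self.cell(tracking_state, dec_state)
        return self.out(dec_state), dec_state


records = torch.randn(10, 32)                 # ten toy data records
tracker, gen = SaliencyTracker(32, 64), Generator(64, 1000)
t_state, d_state = torch.zeros(1, 64), torch.zeros(1, 64)
for step in range(20):
    if step % 5 == 0:                         # stand-in for a learned transition gate
        idx = tracker.select(records, t_state)
        t_state = tracker.update(records[idx], t_state)
    logits, d_state = gen.step(t_state, d_state)

In the full model the transition decision is of course learned rather than periodic, and the decoder emits a token from the logits at each step; the sketch only shows how the two modules interact.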
2 Related Work

2.1 Data-to-Text Generation

Data-to-text generation is the task of generating descriptions from structured or non-structured data, including sports commentary (Tanaka-Ishii et al., 1998; Chen and Mooney, 2008; Taniguchi et al., 2019), weather forecasts (Liang et al., 2009; Mei et al., 2016), biographical text from Wikipedia infoboxes (Lebret et al., 2016; Sha et al., 2018; Liu et al., 2018), and market comments from stock prices (Murakami et al., 2017; Aoki et al., 2018).

Neural generation methods have become the mainstream approach for data-to-text generation. The encoder-decoder framework (Cho et al., 2014; Sutskever et al., 2014) with the attention (Bahdanau et al., 2015; Luong et al., 2015) and copy mechanisms (Gu et al., 2016; Gulcehre et al., 2016) has been successfully applied to data-to-text tasks. However, neural generation methods sometimes yield fluent but inadequate descriptions (Tu et al., 2017). In data-to-text generation, descriptions inconsistent with the input data are problematic.

Recently, Wiseman et al. (2017) introduced the ROTOWIRE dataset, which contains multi-sentence summaries of basketball games with box scores (Table 1). This dataset requires the selection of a salient subset of data records for generating descriptions. They also proposed automatic evaluation metrics for measuring the informativeness of generated summaries.

Puduppully et al. (2019) proposed a two-stage method that first predicts the sequence of data records to be mentioned and then generates a summary conditioned on the predicted sequence. Their idea is similar to ours in that both consider a sequence of data records as content planning. However, our proposal differs from theirs in that ours uses a recurrent neural network for saliency tracking, and in that our decoder dynamically chooses a data record to be mentioned without fixing a sequence of data records in advance.

2.2 Memory modules

The memory network can be used to maintain and update representations of salient information (Weston et al., 2015; Sukhbaatar et al., 2015; Graves et al., 2016). This kind of module is often used in natural language understanding to keep track of entity states (Kobayashi et al., 2016; Hoang et al., 2018; Bosselut et al., 2018).

Recently, entity tracking has become popular for generating coherent text (Kiddon et al., 2016; Ji et al., 2017; Yang et al., 2017; Clark et al., 2018). Kiddon et al. (2016) proposed a neural checklist model that updates predefined item states. Ji et al. (2017) proposed an entity representation for language modeling; their method updates the entity tracking state whenever an entity is introduced and selects the salient entity state.

Our model extends this entity tracking module to data-to-text generation. The entity tracking module selects the salient entity and the appropriate attribute at each timestep, updates their states, and generates a coherent summary from the selected data records.
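To make the entity-tracking idea above concrete, here is a minimal sketch that keeps one state vector per entity and refreshes it on every mention, in the spirit of Ji et al. (2017); the class name, the GRU-based update, and all dimensions are assumptions for illustration, not any cited paper's exact formulation.

# Illustrative entity-state memory; the GRU update and all names are
# assumptions made for this sketch.
import torch
import torch.nn as nn


class EntityMemory(nn.Module):
    """One state vector per entity, refreshed on every mention."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.cell = nn.GRUCell(hidden_dim, hidden_dim)
        self.states = {}  # entity id -> (1, hidden_dim) state tensor

    def mention(self, entity_id: str, context: torch.Tensor) -> torch.Tensor:
        # Initialize the state on the first mention, then fold each new
        # mention context into the stored state.
        prev = self.states.get(entity_id, torch.zeros(1, self.hidden_dim))
        self.states[entity_id] = self.cell(context, prev)
        return self.states[entity_id]


memory = EntityMemory(64)
h = memory.mention("Antetokounmpo", torch.randn(1, 64))  # first mention
h = memory.mention("Antetokounmpo", torch.randn(1, 64))  # state is updated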
3 Data

Through careful examination, we found that in the original ROTOWIRE dataset, some NBA games have two documents, one of which is sometimes in the training data while the other is in the test or validation data. Such documents are similar to each other, though not identical. To make the dataset more reliable for experiments, we created a new version.

We ran the script provided by Wiseman et al. (2017) for crawling the ROTOWIRE website for NBA game summaries. The script collected approximately 78% of the documents in the original dataset; the remaining documents had disappeared. We also collected the box scores associated with the collected documents and observed that some of them had been modified compared with the original ROTOWIRE dataset.

The collected dataset contains 3,752 instances (i.e., pairs of a document and box scores). However, the four shortest documents were not summaries; they were, for example, announcements about the postponement of a match. We thus deleted these 4 instances and were left with 3,748 instances. We followed the dataset split of Wiseman et al. (2017) to divide our dataset into training, development, and test data. We found 14 instances that had no corresponding instances in the original data, and randomly classified 9, 2, and 3 of those 14 instances into training, development, and test data, respectively. Finally, the sizes of the training, development, and test data are […].

Table 1: (a) box scores and (b) the corresponding game summary (both excerpted).

(a)

TEAM    H/V  WIN  LOSS  PTS  REB  AST  FG_PCT  FG3_PCT  …
KNICKS  H    16   19    104  46   26   45      46       …
BUCKS   V    18   16    105  42   20   47      32       …

PLAYER                 H/V  PTS  REB  AST  BLK  STL  MIN  CITY       …
CARMELO ANTHONY        H    30   11   7    0    2    37   NEW YORK   …
DERRICK ROSE           H    15   3    4    0    1    33   NEW YORK   …
COURTNEY LEE           H    11   2    3    1    1    38   NEW YORK   …
GIANNIS ANTETOKOUNMPO  V    27   13   4    3    1    39   MILWAUKEE  …

(b)

The Milwaukee Bucks defeated the New York Knicks, 105-104, at Madison Square Garden on Wednesday. The Knicks (16-19) checked in to Wednesday's contest looking to snap a five-game losing streak and heading into the fourth quarter, they looked like they were well on their way to that goal. … Antetokounmpo led the Bucks with 27 points, 13 rebounds, four assists, a steal and three blocks, his second consecutive double-double. Greg Monroe actually checked in as the second-leading scorer and did so in his customary bench role, posting 18 points, along with nine boards, four assists, three steals and a block. Jabari Parker contributed 15 points, four rebounds, three assists and a steal.
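Returning to the dataset construction in Section 3, the following sketch illustrates the filtering and split steps described there. The input format, field names, and helper structures are hypothetical; only the counts (drop the 4 shortest documents, reassign the 14 leftover instances 9/2/3) follow the text.

# Hedged sketch of the dataset-rebuilding steps in Section 3; 'pairs'
# and 'wiseman_split' are hypothetical inputs, not a real API.
import random


def rebuild_rotowire(pairs, wiseman_split, seed=0):
    """pairs: list of {'game_id': str, 'summary': str, 'box_score': dict}."""
    # Drop the four shortest documents, which are not game summaries
    # (e.g., announcements of postponed matches).
    pairs = sorted(pairs, key=lambda p: len(p["summary"].split()))[4:]

    # Follow the original train/dev/test split where a counterpart exists.
    splits = {"train": [], "dev": [], "test": []}
    leftover = []
    for p in pairs:
        name = wiseman_split.get(p["game_id"])
        if name in splits:
            splits[name].append(p)
        else:
            leftover.append(p)  # 14 such instances in the paper

    # Randomly assign the leftover instances 9/2/3 to train/dev/test.
    random.Random(seed).shuffle(leftover)
    splits["train"] += leftover[:9]
    splits["dev"] += leftover[9:11]
    splits["test"] += leftover[11:]
    return splits["train"], splits["dev"], splits["test"]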