
An Evolutionary Algorithm for Automatic Summarization

Aurélien Bossard* and Christophe Rodrigues**
*LIASD - EA 4383, Université Paris 8, Saint-Denis, France
[email protected]
**Léonard de Vinci Pôle Universitaire, Research Center, Paris La Défense, France
[email protected]

Abstract

This paper proposes a novel method to select sentences for automatic summarization based on an evolutionary algorithm. The algorithm explores the space of candidate summaries following an objective function computed over the n-gram probability distributions of the candidate summary and of the source documents. This method does not consider a summary as a stack of independent sentences but as a whole text, and makes use of advances in unsupervised summarization evaluation. We compare this sentence extraction method to one of the best existing methods, which is based on integer linear programming, and show its efficiency on three acknowledged corpora.

1 Introduction

Automatic summarization systems are essential components of information systems. Indeed, the growth of digital information sources can have a negative effect on online content reading and assimilation. Summarizing such content can help users apprehend it better. Automatic summarization therefore became one of the earliest research topics in natural language processing (Luhn, 1958) and remains a widely studied one.

In order to validate the benefits obtained from new methods or parametrizations, the automatic summarization field needs robust evaluation methods. Evaluating automatic summaries, just like evaluating automatic translation, is a complex task. Until the early 2000s, only two types of approach existed: entirely manual evaluation with a reading grid, and semi-automatic evaluations that compare automatic summaries with human-written references. Since then, entirely automatic approaches that evaluate a summary without a human reference (writing one is the most time-consuming task in evaluation) have emerged and have recently achieved good performance, using probabilistic models (Louis and Nenkova, 2009; Saggion et al., 2010).

Probabilistic models for automatic summarization evaluation are natural: a summary and its source have to share the same distribution of concepts. Since they have proven effective for evaluation, using them to guide the summarization process itself seems obvious, although it has, to our knowledge, not been tested before. As opposed to most automatic summarization methods, which use encoded metrics, we propose to consider automatic summarization as the optimization of a natural score: the divergence between the distribution of concepts in the source and the distribution of concepts in a candidate summary.

We therefore view automatic summarization as choosing the best summary, among a very large set of candidate summaries, according to a metric computed on the whole candidate summary. This leads us to the use of an evolutionary algorithm in order to naturally select the best summaries.
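As a rough illustration of this idea, a candidate summary can be represented as a set of sentence indices, and a generic evolutionary loop can evolve a population of such sets under a length budget, keeping the candidate that maximizes a whole-summary objective. The sketch below is not the authors' algorithm: the greedy random initialization, truncation selection, and add/remove mutation are illustrative assumptions.

```python
# Minimal sketch of selecting a summary with a generic evolutionary loop.
# Operators and parameters are illustrative assumptions, not the authors'
# algorithm; `objective` is any score computed on a whole candidate summary.
import random

def evolve_summary(sentences, objective, budget=100, pop_size=50,
                   generations=200, removal_rate=0.3, seed=0):
    rng = random.Random(seed)

    def length(cand):
        return sum(len(sentences[i].split()) for i in cand)

    def random_candidate():
        cand = set()
        for i in rng.sample(range(len(sentences)), len(sentences)):
            if length(cand | {i}) <= budget:
                cand.add(i)
        return frozenset(cand)

    def mutate(cand):
        cand = set(cand)
        if cand and rng.random() < removal_rate:
            cand.remove(rng.choice(sorted(cand)))        # drop one sentence
        for i in rng.sample(range(len(sentences)), len(sentences)):
            if i not in cand and length(cand | {i}) <= budget:
                cand.add(i)                              # add one that still fits
                break
        return frozenset(cand)

    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=objective, reverse=True)
        parents = ranked[: pop_size // 2]                # truncation selection
        children = [mutate(rng.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=objective)
```

A divergence-based score such as the ones described in Section 3 would be passed as `objective`, taking a set of sentence indices and returning a value to maximize.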
While other recent papers (Li et al., 2013; Nishikawa et al., 2014; Peyrard and Eckle-Kohler, 2016) integrate sophisticated and task-specific preprocessing and postprocessing, and handle semantics using complex representations, this paper proposes a new, generic and directly usable sentence extraction method for automatic summarization. This method explores the space of candidate summaries using an evolutionary algorithm. The algorithm aims at finding an approximate solution to the maximization of an objective function computed over a candidate summary.

We first present iterative methods and exploratory analysis methods for automatic summarization. We then describe our sentence extraction method and the evaluation protocol: the method is compared to one of the best methods in the automatic summarization field. Finally, we discuss our results.

2 Related Work

Iterative Selection  Automatic summarization systems generally combine a centrality score for text portions with an extraction method for these portions. The first automatic summarization systems (Luhn, 1958; Edmundson, 1969) simply extracted the most central portions. The MMR method (Carbonell and Goldstein, 1998) allows for iterative extraction of text portions given a centrality score and a redundancy score. The CSIS-based method (Radev, 2000) removes, from a list of text portions sorted by centrality, every portion that shares too much information with a higher-ranked one. These methods share a major drawback: the generated summaries depend mostly on the first selected text portion. They are therefore prone to missing summaries made of average-ranked sentences that, combined together, correctly reflect the overall content of the source documents.

Optimization  Other methods emerged more recently to overcome this problem. They consist in exploring the space of all candidate summaries in order to find the one that maximizes an objective function. This problem is exponential in the number of input sentences, as there are $\binom{m}{n}$ candidate summaries composed of $n$ sentences for a corpus of $m$ sentences. For example, choosing 10 sentences out of 200 already leads to about $2.2 \times 10^{16}$ possible solutions. Adding constraints on text portion selection and using ILP (Integer Linear Programming) helps delimit the problem and find an exact solution, although not always (McDonald, 2007; Gillick and Favre, 2009b). The search space is limited by constraints on text portion length and by constraints that avoid including text portions that do not provide additional information. Whereas Gillick and Favre (2009b) select summaries by maximizing bigram occurrences, Li et al. (2013) try to maximize the similarity between summary and source bigram frequencies. These methods have proven very efficient. However, the use of an ILP solver forces the problem to be modeled with linear functions, and thus rules out more complex functions that could better take into account the structure of the automatic summarization problem.
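To make the contrast concrete, the snippet below sketches a bigram-coverage ILP in the spirit of Gillick and Favre (2009b), written with the PuLP library on toy data; the bigram weighting, the exact constraint set and the data are illustrative assumptions rather than the cited formulation.

```python
# Sketch of a bigram-coverage ILP in the spirit of Gillick and Favre (2009b).
# Data, bigram weighting and constraints are simplified assumptions.
from collections import Counter
import pulp

def bigrams(tokens):
    return list(zip(tokens, tokens[1:]))

sentences = [
    "the cat sat on the mat".split(),
    "a dog chased the cat".split(),
    "the mat on the floor was red".split(),
]
budget = 12  # maximum summary length, in tokens

# Weight each bigram by the number of sentences it appears in (toy choice).
weights = Counter(b for sent in sentences for b in set(bigrams(sent)))
blist = list(weights)

prob = pulp.LpProblem("bigram_coverage", pulp.LpMaximize)
s_vars = [pulp.LpVariable(f"s{j}", cat="Binary") for j in range(len(sentences))]
c_vars = [pulp.LpVariable(f"c{i}", cat="Binary") for i in range(len(blist))]

# Objective: total weight of the bigrams covered by the summary.
prob += pulp.lpSum(weights[b] * c_vars[i] for i, b in enumerate(blist))
# Length budget on the selected sentences.
prob += pulp.lpSum(len(sent) * s_vars[j] for j, sent in enumerate(sentences)) <= budget
# Selecting a sentence covers all of its bigrams; a bigram is covered only if
# at least one sentence containing it is selected.
for i, b in enumerate(blist):
    containing = [j for j, sent in enumerate(sentences) if b in bigrams(sent)]
    for j in containing:
        prob += s_vars[j] <= c_vars[i]
    prob += c_vars[i] <= pulp.lpSum(s_vars[j] for j in containing)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([" ".join(sent) for j, sent in enumerate(sentences) if s_vars[j].value() == 1])
```

Note that both the objective and the constraints must stay linear in the decision variables, which is exactly the restriction that a constraint-free search procedure avoids.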
Liu et al. (2006), Nandhini and Balasundaram (2013) and Shigematsu and Kobayashi (2014) propose to look for an approximate solution using a genetic algorithm. As opposed to ILP extraction methods, these methods are free of any constraint on the objective function. However, they keep considering a summary as a set of independent portions of text, and do not take advantage of the new structure of the problem that we propose: the summary is not considered as a whole. Considering a summary as a whole allows for a better exploration of the space and for more complex objective functions. Alfonseca and Rodríguez (2003) use a "standard GA", without describing it, together with several fitness scores whose main metric is the cosine-tfidf similarity between the sources and the candidate summaries. However, this similarity has obtained poor results when used as an automatic evaluation metric (Nenkova et al., 2007), so it should not be used as a fitness score for automatic summarization. As opposed to Alfonseca and Rodríguez (2003), we define a new, non-standard, and fully replicable evolutionary algorithm combined with an extension of an agreed-upon automatic evaluation metric.

Supervised Learning  Litvak et al. (2010) and Bossard and Rodrigues (2011) use genetic algorithms for supervised learning of parameters in order to tune automatic summarization systems. Nishikawa et al. (2014), Takamura and Okumura (2010) and Sipos et al. (2012) perform structured output learning to maximize ROUGE scores. These approaches suffer from the complexity of the machine learning model. Moreover, they require learning data and are very task-specific. Recently, Peyrard and Eckle-Kohler (2016) proposed to use an approximation of the ROUGE-N score combined with an ILP solver. The approximation of the ROUGE-N score is obtained with supervised learning. The results outperform state-of-the-art methods on the DUC 2002 and 2003 corpora.

In contrast to supervised learning, we propose a fully unsupervised method that allows for summarizing even when no manual reference is available. We propose to use scoring functions based on probabilistic models of the source documents and the candidate summaries. The smoothing used for building the probability distributions considers a candidate summary as a whole, not as a set of independent sentences. This complex objective function requires a constraint-free optimization algorithm, so evolutionary algorithms seem appropriate. At the same time, the scoring functions we use make it possible to take greater advantage of evolutionary algorithms.

3 Our Method

Louis and Nenkova (2009) have proposed an entirely unsupervised function for automatic summary evaluation. The evaluation method has proved efficient, as it strongly correlates with the Pyramid score, a semi-automatic score used in TAC evaluation campaigns to assess the informational quality of automatic summaries (Nenkova et al., 2007). Saggion et al. (2010)

3.1 Unigram Distribution Objective Function

This objective function (Uniprob) matches the automatic evaluation function described above and in Louis and Nenkova (2009).

3.2 Bigram Simple Sum Objective Function

The bigram simple sum objective function (Bisimple) consists in summing all bigram weights in a candidate summary. Candidate summaries get a high score if they are composed of the most frequent bigrams in the source documents.

3.3 Bigram Cosine Objective Function
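As a rough sketch of how such objective functions can be computed: Uniprob is rendered below as a negated Jensen-Shannon divergence between add-alpha smoothed unigram distributions, in the spirit of Louis and Nenkova (2009); Bisimple as a sum of source bigram frequencies over the distinct bigrams of the candidate; and the bigram cosine objective as a cosine similarity between bigram count vectors. The smoothing scheme, the choice of divergence, the once-per-bigram counting and the exact shape of the cosine objective are assumptions, not the authors' implementation.

```python
# Hedged sketches of the three objective functions named in Section 3.
# Smoothing, weighting and the exact divergence are assumptions.
import math
from collections import Counter

def ngram_dist(tokens, n, vocab, alpha=1.0):
    """Add-alpha smoothed n-gram distribution over a fixed vocabulary."""
    counts = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(counts.values()) + alpha * len(vocab)
    return {g: (counts[g] + alpha) / total for g in vocab}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions on the same support."""
    def kl(a, b):
        return sum(a[g] * math.log2(a[g] / b[g]) for g in a if a[g] > 0)
    m = {g: 0.5 * (p[g] + q[g]) for g in p}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def uniprob(summary_tokens, source_tokens):
    """Higher is better: negated JS divergence between unigram distributions."""
    vocab = {(w,) for w in source_tokens} | {(w,) for w in summary_tokens}
    return -js_divergence(ngram_dist(source_tokens, 1, vocab),
                          ngram_dist(summary_tokens, 1, vocab))

def bisimple(summary_tokens, source_tokens):
    """Sum of source bigram frequencies over the candidate's distinct bigrams."""
    source_bigrams = Counter(zip(source_tokens, source_tokens[1:]))
    return sum(source_bigrams[b] for b in set(zip(summary_tokens, summary_tokens[1:])))

def bigram_cosine(summary_tokens, source_tokens):
    """Cosine similarity between bigram count vectors (assumed form)."""
    s = Counter(zip(summary_tokens, summary_tokens[1:]))
    d = Counter(zip(source_tokens, source_tokens[1:]))
    dot = sum(s[b] * d[b] for b in s)
    norm = math.sqrt(sum(v * v for v in s.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0
```

After wrapping them so that a set of sentence indices is mapped to the candidate's token sequence, any of these scores can be plugged into an evolutionary loop such as the one sketched in the introduction, since none of them needs to be linear in the selected sentences.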