
Topic-Preserving Synthetic News Generation: An Adversarial Deep Reinforcement Learning Approach

Ahmadreza Mosallanezhad (Arizona State University, [email protected])
Kai Shu (Illinois Institute of Technology, [email protected])
Huan Liu (Arizona State University, [email protected])

arXiv:2010.16324v1 [cs.CL] 30 Oct 2020

ABSTRACT

Nowadays, there exist powerful language models such as OpenAI's GPT-2 that can generate readable text and can be fine-tuned to generate text for a specific domain. However, GPT-2 cannot directly generate synthetic news with respect to a given topic, and the output of the language model cannot be explicitly controlled. In this paper, we study the novel problem of topic-preserving synthetic news generation. We propose a novel deep reinforcement learning-based method to control the output of GPT-2 (*) with respect to a given news topic. When generating text with GPT-2, by default, the most probable word is selected from the vocabulary. Instead of selecting the best word each time from GPT-2's output, an RL agent tries to select words that optimize the matching of a given topic. In addition, using a fake news detector as an adversary, we investigate generating realistic news with our proposed method. In this paper, we consider realistic news to be news that cannot be easily detected by a fake news classifier. Experimental results demonstrate the effectiveness of the proposed framework at generating topic-preserving news content compared with state-of-the-art baselines.

(*) Our model is based on GPT-2, as GPT-3 [4] is not publicly available.

KEYWORDS

Reinforcement Learning, Text Generation, Adversarial Learning

ACM Reference Format:
Ahmadreza Mosallanezhad, Kai Shu, and Huan Liu. 2020. Topic-Preserving Synthetic News Generation: An Adversarial Deep Reinforcement Learning Approach. In Woodstock '18: ACM Symposium on Neural Gaze Detection, June 03–05, 2018, Woodstock, NY. ACM, New York, NY, USA, 10 pages. https://doi.org/.

1 INTRODUCTION

Text generation is an important task in Natural Language Processing (NLP). With the rise of deep neural networks such as recurrent neural networks (RNNs) and Long Short-Term Memory (LSTM) cells [14], there has been a huge performance improvement in language modeling and text generation. Text generation has many different applications, such as paraphrase generation and data augmentation. One important application of text generation in NLP is synthetic news content generation [37].

Recently, social media has proliferated a plethora of disinformation and fake news [1, 33]. Recent advancements in language models such as OpenAI's GPT-2 [30] allow one to generate synthetic news based on limited information. For example, models like the Generative Adversarial Network (GAN) [16] can generate long readable text from noise, and GPT-2 [30] can write news stories and fiction given simple context such as part of a sentence or a topic. One recent model, Grover [37], focuses on generating fake news using causal language models, conditioned on variables such as domain, date, authors, and headline. While Grover is shown to be effective, it requires many conditional variables to generate relevant news. To be able to study the problem of machine-generated news on social media, we propose a model to generate realistic synthetic news. This is a crucial task, as it enables us to generate synthetic news and study the differences between real and generated synthetic news. As stated before, one major problem in fake news detection systems is that they cannot differentiate between human- and machine-generated text. Advances in language models enable one to generate fake news and spread it through social media. To tackle this problem, one major step is being able to generate synthetic news. By generating synthetic news, one can study the hidden differences between human- and machine-generated text, hence preventing disinformation on social media.

Existing methods may fall short when generating realistic news controlled by a specific context. In the real-world scenario, fake news usually has a catchy style and should stay on topic to make its audience believe it. For example: "A shocking new report claims Kourtney Kardashian's pregnant again". Thus, it is important to study the problem of topic-preserving and realistic synthetic news generation. Moreover, fine-tuning language models does not help us in this matter, as it is non-trivial to enforce topic preservation on a language model directly. In essence, we address the following challenges: (1) how we can generate news content similar to human writing; (2) as training a language model is time-consuming and needs a lot of resources, how we can use a faster approach to generate news content; and (3) how we can ensure that the generated news content is both realistic and related to a given topic.

Our solutions to these challenges result in a novel framework, RLTG (Reinforcement Learning-based Text Generator), for generating topic-preserving realistic fake news. The proposed framework RLTG consists of three major components: (1) a language model component that generates a probability distribution over a vocabulary for the next word, given a text input; (2) a Reinforcement Learning (RL) component capable of leveraging the language model to control news generation; and (3) a fake news detection module that acts as an adversary to help the RL agent generate realistic fake news content. Our contributions are summarized as follows:

• We study a novel problem of topic-preserving and realistic synthetic news content generation.
• We propose a principled framework, RLTG, which uses a language model and deep reinforcement learning along with adversary regularization to generate realistic synthetic news content.
• We conduct experiments on real-world datasets using quantitative and qualitative metrics to demonstrate the effectiveness of RLTG for synthetic news generation.

2 RELATED WORK

In this section, we briefly describe the related work on (1) neural news generation; (2) adversarial training; and (3) reinforcement learning for text generation.

2.1 Neural news generation

Text generation is a crucial task in Natural Language Processing and is used in many different applications of NLP [11, 29]. Many early methods for text generation use different techniques to train Generative Adversarial Networks (GANs). As GANs cannot be used directly for text generation due to the discrete nature of the problem, early works try to solve the problem of back-propagation for updating the generator. Several methods [13, 16, 26] have been proposed to alleviate this problem. MaskGAN [13] tries to generate text using both a GAN and actor-critic networks. Finally, LeakGAN [16], unlike other GAN-based methods in which the discriminator and generator are trained against each other, uses the discriminator to help the generator predict the next word.

Newer methods try to leverage Reinforcement Learning (RL) for the text generation problem. [12, 31] use inverse RL to solve the problem of mode collapse in GANs, meaning that during the training of a GAN the discriminator becomes so powerful that we cannot train a generator against it. Another work [20] models the problem

2.2 Adversarial training

Yu et al. propose a novel method named SeqGAN. They apply a Generative Adversarial Network [15] to discrete sequence generation by directly optimizing the discriminator's rewards using policy-gradient reinforcement learning [36]. Other approaches use a continuous approximation of discrete tokens to facilitate the gradient propagation process [17, 21]. Continuous approximation uses the Gumbel-softmax function [19] to transform the one-hot vector into a probabilistic vector that is differentiable for training.

Efforts have also been made to generate diverse and high-quality text [16, 38]. Guo et al. propose a new method for generating long text using adversarial training. They leverage the hidden states of an adversary as leaked information in order to optimize a GAN to generate long text [16]. To broaden the domains of generated text, Wang et al. propose a method that uses a multi-class classifier as the discriminator. It further uses multiple generators alongside the discriminator to optimize the model [35]. Moreover, Zhang et al. propose a novel method, TextGAN, to alleviate the problems of generating text with GANs. They use an LSTM as the generator and a Convolutional Neural Network as the discriminator [38].

2.3 Reinforcement learning in text generation

In the past years, reinforcement learning has been shown to be useful for improving model parameters [23, 25]. Furthermore, it can be used as a standalone algorithm for different purposes such as dialog or paraphrase generation. Fedus et al. propose a method for overcoming the problems of generating text via GANs: they use reinforcement learning to tune the parameters of an LSTM-based generator [13]. Zichao Li et al. propose a method for generating paraphrases using inverse reinforcement learning. They use an RL setting to tune a generator's parameters toward generating paraphrases [25]. Another inspiring work, by Jiwei Li et al., shows that using reinforcement learning we can build an agent capable of engaging in a two-person dialog [23].
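The continuous-approximation trick referenced above can be made concrete with a short sketch. The following is an illustrative NumPy implementation of the Gumbel-softmax relaxation [19], not code from any of the cited systems; in practice one would use a framework implementation (e.g., in PyTorch) so that gradients flow automatically.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable relaxation of drawing a one-hot token vector.

    Adds Gumbel(0, 1) noise to the logits and applies a temperature-
    scaled softmax; as tau -> 0 the output approaches a one-hot vector
    while remaining differentiable for training.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.uniform(1e-9, 1.0, size=np.shape(logits))
    gumbel = -np.log(-np.log(u))          # Gumbel(0, 1) via inverse transform
    y = (np.asarray(logits) + gumbel) / tau
    y = y - y.max()                       # numerical stability
    e = np.exp(y)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
soft = gumbel_softmax(logits, tau=1.0)    # smooth probability vector
hard = gumbel_softmax(logits, tau=0.05)   # low temperature: close to one-hot
```

The temperature `tau` trades off between a usable gradient signal (high `tau`) and faithfulness to discrete sampling (low `tau`).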
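SeqGAN's core idea, treating the discriminator's output as a policy-gradient reward for a discrete generator [36], can be illustrated with a deliberately tiny example. The four-token "generator" and the fixed discriminator below are hypothetical stand-ins, not SeqGAN itself; the update rule is plain REINFORCE.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(4)  # logits of a 4-token categorical "generator" policy

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def discriminator_reward(token):
    """Stand-in discriminator: rewards only token 2 as 'realistic'."""
    return 1.0 if token == 2 else 0.0

lr = 0.5
for _ in range(200):
    probs = softmax(theta)
    token = rng.choice(4, p=probs)        # sample a discrete token
    reward = discriminator_reward(token)
    grad_log_pi = -probs                  # gradient of log pi(token | theta)
    grad_log_pi[token] += 1.0
    theta += lr * reward * grad_log_pi    # REINFORCE update

final_probs = softmax(theta)  # mass concentrates on the rewarded token
```

Sampling keeps the token discrete, and the gradient is taken through the policy's log-probability rather than through the sample, which is exactly why this family of methods sidesteps the non-differentiability problem described above.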
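Finally, the word-selection idea at the heart of RLTG, choosing among the language model's top-k next-token candidates so as to stay on topic rather than greedily taking the most probable token, can be sketched as follows. Everything here is a hypothetical stand-in: random logits replace GPT-2, and a bag-of-words overlap score replaces the learned RL agent and its reward.

```python
import numpy as np

# Toy vocabulary and random logits standing in for GPT-2's output.
VOCAB = ["the", "economy", "cat", "market", "sat", "stocks"]
rng = np.random.default_rng(42)

def lm_next_token_probs():
    """Stand-in for a language model's next-token distribution."""
    logits = rng.normal(size=len(VOCAB))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def topic_score(word, topic_words):
    """Toy reward: 1.0 if the candidate matches the topic, else 0.0."""
    return 1.0 if word in topic_words else 0.0

def pick_next_word(topic_words, k=3, alpha=0.5):
    """Re-rank the top-k LM candidates by mixing LM probability with a
    topic-matching reward (the role played by the RL agent in RLTG)."""
    probs = lm_next_token_probs()
    topk = np.argsort(probs)[::-1][:k]
    scores = [(1 - alpha) * probs[i] + alpha * topic_score(VOCAB[i], topic_words)
              for i in topk]
    return VOCAB[topk[int(np.argmax(scores))]]

word = pick_next_word({"economy", "market", "stocks"})
```

In the actual framework the scoring function is a trained RL agent whose reward also incorporates the adversarial fake-news detector; the sketch only shows the control loop wrapped around the language model's output distribution.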