
Neural Deepfake Detection with Factual Structure of Text

Wanjun Zhong1∗, Duyu Tang2, Zenan Xu1∗, Ruize Wang3, Nan Duan2, Ming Zhou2, Jiahai Wang1 and Jian Yin1
1 The School of Data and Computer Science, Sun Yat-sen University; Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, P.R. China
2 Microsoft Research
3 Fudan University, Shanghai, P.R. China
∗ Work done while this author was an intern at Microsoft Research.

Abstract

Deepfake detection, the task of automatically discriminating machine-generated text, is increasingly critical with recent advances in natural language generative models. Existing approaches to deepfake detection typically represent documents with coarse-grained representations. However, they struggle to capture factual structures of documents, which is a discriminative factor between machine-generated and human-written text according to our statistical analysis. To address this, we propose a graph-based model that utilizes the factual structure of a document for deepfake detection of text. Our approach represents the factual structure of a given document as an entity graph, which is further utilized to learn sentence representations with a graph neural network. Sentence representations are then composed to a document representation for making predictions, where consistent relations between neighboring sentences are sequentially modeled. Results of experiments on two public deepfake datasets show that our approach significantly improves strong base models built with RoBERTa. Model analysis further indicates that our model can distinguish the difference in the factual structure between machine-generated text and human-written text.

[Figure 1: An example of machine-generated fake news, headlined "Zarif: No nuclear enrichment prohibition by Iran" (April 15, 2019), containing the passage "… The law currently imposes a 100 million euro ($113 million) fine on Iran in case of a violation, but under the new bill, if Saudis violate the arms embargo, Tehran will be handed a 2 billion euros fine. …" We can observe that the factual structure of entities extracted by named entity recognition is inconsistent.]

1 Introduction

Nowadays, unprecedented amounts of online misinformation (e.g., fake news and rumors) spread through the internet, which may misinform people's opinions of essential social events (Faris et al., 2017; Thorne et al., 2018; Goodrich et al., 2019; Kryściński et al., 2019). Recent advances in neural generative models, such as GPT-2 (Radford et al., 2019), make the situation even more severe, as their ability to generate fluent and coherent text may enable adversaries to produce fake news. In this work, we study deepfake detection of text, to automatically discriminate machine-generated text from human-written text.

Previous works on deepfake detection of text are dominated by neural document classification models (Bakhtin et al., 2019; Zellers et al., 2019; Wang et al., 2019; Vijayaraghavan et al., 2020). They typically tackle the problem with coarse-grained document-level evidence such as dense vectors learned by a neural encoder and traditional features (e.g., TF-IDF, word counts). However, these coarse-grained models struggle to capture the fine-grained factual structure of the text. We define the factual structure as a graph containing entities mentioned in the text and the semantically relevant relations among them. As shown in the motivating example in Figure 1, even though machine-generated text seems coherent, its factual structure is inconsistent.

Our statistical analysis further reveals the difference in the factual structure between human-written and machine-generated text (detailed in Section 3). Thus, modeling factual structures is essential for detecting machine-generated text.

Based on the aforementioned analysis, we propose FAST, a graph-based reasoning approach utilizing FActual Structure of Text for deepfake detection. With a given document, we represent its factual structure as a graph, where nodes are automatically extracted by named entity recognition. Node representations are calculated not only with the internal factual structure of a document via a graph convolution network, but also with external knowledge from entity representations pre-trained on Wikipedia. These node representations are fed to produce sentence representations which, together with the coherence of continuous sentences, are further used to compose a document representation for making the final prediction.

We conduct experiments on a news-style dataset and a webtext-style dataset, with negative instances generated by GROVER (Zellers et al., 2019) and GPT-2 (Radford et al., 2019), respectively. Experiments show that our method significantly outperforms strong transformer-based baselines on both datasets. Model analysis further indicates that our model can distinguish the difference in the factual structure between machine-generated text and human-written text. The contributions are summarized as follows:

• We propose a graph-based approach, which models the fine-grained factual structure of a document for deepfake detection of text.

• We statistically show that machine-generated text differs from human-written text in terms of the factual structures, and injecting factual structures boosts detection accuracy.

• Results of experiments on news-style and webtext-style datasets verify that our approach achieves improved accuracy compared to strong transformer-based pre-trained models.

2 Task Definition

We study the task of deepfake detection of text in this paper. This task discriminates machine-generated text from human-written text, which can be viewed as a binary classification problem. We conduct our experiments on two datasets with different styles: a news-style dataset with fake text generated by GROVER (Zellers et al., 2019) and a large-scale webtext-style dataset with fake text generated by GPT-2 (Radford et al., 2019). The news-style dataset consists of 25,000 labeled documents, and the webtext-style dataset consists of 520,000 labeled documents. With a given document, systems are required to perform reasoning about the content of the document and assess whether it is "human-written" or "machine-generated".

3 Factual Consistency Verification

In this part, we conduct a statistical analysis to reveal the difference in the factual structure between human-written and machine-generated text. Specifically, we study the difference in factual structures from a consistency perspective and analyze entity-level and sentence-level consistency.

Through data observation, we find that human-written text tends to repeatedly mention the same entity in continuous sentences, while machine-written continuous sentences are more likely to mention irrelevant entities. Therefore, we define the entity consistency count (ECC) of a document as the number of entities that are repeatedly mentioned in the next w sentences, where w is the sentence window size. The sentence consistency count (SCC) of a document is defined as the number of sentences that mention the same entities as the next w sentences. For instance, if the entities mentioned in three continuous sentences are "A and B; A; B" and w = 2, then ECC = 2 because the two entities A and B are repeatedly mentioned in the next 2 sentences, and SCC = 1 because only the first sentence has entities mentioned in the next 2 sentences.
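To make the two definitions concrete, the following is a minimal sketch, not the authors' code, of how ECC and SCC could be computed once each sentence has been reduced to a set of entity mentions; the function name and data layout are illustrative, and edge cases (e.g., an entity repeated from several different sentences) may be counted differently in the original analysis.

def consistency_counts(sentence_entities, w):
    """Compute (ECC, SCC) for one document.

    sentence_entities: list of sets, the entity mentions in each sentence.
    w: sentence window size.
    """
    repeated_entities = set()  # distinct entities repeated within the next w sentences (ECC)
    scc = 0                    # sentences sharing an entity with the next w sentences (SCC)
    for i, entities in enumerate(sentence_entities):
        # Entities mentioned in the next w sentences.
        window = set().union(*sentence_entities[i + 1:i + 1 + w])
        repeated = entities & window
        repeated_entities |= repeated
        if repeated:
            scc += 1
    return len(repeated_entities), scc

# Worked example from above: entities in three continuous sentences are
# "A and B; A; B" with w = 2.
print(consistency_counts([{"A", "B"}, {"A"}, {"B"}], w=2))  # -> (2, 1), i.e., ECC = 2, SCC = 1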
We use all 5,000 pairs of human-written and machine-generated documents from the news-style dataset, where each pair of documents shares the same metadata (e.g., title), for this statistical analysis. We plot the kernel density distribution of these two types of consistency count with sentence window size w = {1, 2}.

[Figure 2: Kernel density distributions of entity consistency count and sentence consistency count for human-written and machine-generated documents, with sentence window size w = 1 and w = 2 (x-axis: consistency count; y-axis: density).]

As shown in Figure 2, human-written documents are more likely to have higher entity-level and sentence-level consistency counts. This analysis indicates that human-written and machine-generated text are different in the factual structure; thus, modeling the consistency of factual structures is essential in discriminating them.

4 Methodology

In this section, we present our graph-based reasoning approach, which considers the factual structures of documents and uses them to guide the reasoning process for the final prediction.

Figure 3 gives a high-level overview of our approach. With a document given as the input, our system begins by calculating the contextual word representations with RoBERTa (§4.1). Then, we […]

[…]ment. In practice, we observe that selecting entities, the core participants of events, as arguments to construct the graph leads to less noise in the representation of the factual structure. Therefore, we employ a named entity recognition (NER) model to parse the entities mentioned in each sentence. Specifically, taking a document as the input, we construct a graph in the following steps.

• We parse each sentence to a set of entities with an off-the-shelf NER toolkit built by AllenNLP, which is an implementation of Peters et al. (2017); a sketch of this step is given below.
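As a rough illustration of the first step above, the sketch below extracts per-sentence entity sets and assembles them into a graph. It is a stand-in rather than the authors' pipeline: the paper uses the AllenNLP NER toolkit implementing Peters et al. (2017), whereas the sketch uses spaCy, and the edge scheme (connecting entities that co-occur in a sentence, and linking repeated mentions of the same entity across sentences) is an assumption, since the remaining construction steps are not included in this excerpt.

import itertools

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # stand-in for the AllenNLP NER model

def build_entity_graph(document):
    """Build an entity graph with one node per (sentence index, entity mention) pair."""
    graph = nx.Graph()
    doc = nlp(document)
    # Step from the paper: parse each sentence into a set of entity mentions.
    sentence_entities = [{ent.text for ent in sent.ents} for sent in doc.sents]
    for i, entities in enumerate(sentence_entities):
        for ent in entities:
            graph.add_node((i, ent), sentence=i, mention=ent)
        # Assumed inner-sentence edges: connect entities mentioned in the same sentence.
        for a, b in itertools.combinations(sorted(entities), 2):
            graph.add_edge((i, a), (i, b))
        # Assumed inter-sentence edges: link repeated mentions of the same entity.
        for j in range(i):
            for ent in entities & sentence_entities[j]:
                graph.add_edge((i, ent), (j, ent))
    return graph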