Journal of Machine Learning Research 22 (2021) 1-41    Submitted 4/20; Revised 10/20; Published 1/21

Residual Energy-Based Models for Text

Anton Bakhtin∗    Facebook AI Research, 770 Broadway, New York, NY 10003, U.S.A.
Yuntian Deng∗    Harvard University, 33 Oxford St., Cambridge, MA 02138, U.S.A.
Sam Gross
Myle Ott
Marc’Aurelio Ranzato
Arthur Szlam
Editor: Samy Bengio

Abstract

Current large-scale auto-regressive language models (Radford et al., 2019; Liu et al., 2018; Graves, 2013) display impressive fluency and can generate convincing text. In this work we start by asking the question: Can the generations of these models be reliably distinguished from real text by statistical discriminators? We find experimentally that the answer is affirmative when we have access to the training data for the model, and guardedly affirmative even if we do not. This suggests that the auto-regressive models can be improved by incorporating the (globally normalized) discriminators into the generative process. We give a formalism for this using the Energy-Based Model framework, and show that it indeed improves the results of the generative models, measured both in terms of perplexity and in terms of human evaluation.

Keywords: energy-based models, text generation, negative sampling, importance sampling, generalization, real/fake discrimination

1. Introduction

Energy-based models (EBMs) have a long history in machine learning (Hopfield, 1982; Hinton, 2002a; LeCun et al., 2006), especially in the image domain (Teh et al., 2003; Ranzato et al., 2013).