Curriculum Learning for Natural Language Understanding Benfeng Xu1∗, Licheng Zhang1∗, Zhendong Mao1†, Quan Wang2, Hongtao Xie1 and Yongdong Zhang1

Total Pages: 16

File Type: pdf, Size: 1020Kb

Curriculum Learning for Natural Language Understanding
Benfeng Xu1∗, Licheng Zhang1∗, Zhendong Mao1†, Quan Wang2, Hongtao Xie1 and Yongdong Zhang1
1School of Information Science and Technology, University of Science and Technology of China, Hefei, China
2Beijing Research Institute, University of Science and Technology of China, Beijing, China
{benfeng,zhlicheng}@mail.ustc.edu.cn, [email protected], {zdmao,htxie,zhyd73}@ustc.edu.cn
∗ Equal contribution. † Corresponding author.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6095-6104, July 5-10, 2020. © 2020 Association for Computational Linguistics.

Abstract

With the great success of pre-trained language models, the pretrain-finetune paradigm has become the dominant solution for natural language understanding (NLU) tasks. At the fine-tune stage, target task data is usually introduced in a completely random order and treated equally. However, examples in NLU tasks can vary greatly in difficulty, and, similar to the human learning procedure, language models can benefit from an easy-to-difficult curriculum. Based on this idea, we propose our Curriculum Learning approach. By reviewing the trainset in a crossed way, we are able to distinguish easy examples from difficult ones, and arrange a curriculum for language models. Without any manual model architecture design or use of external data, our Curriculum Learning approach obtains significant and universal performance improvements on a wide range of NLU tasks.

Easy cases:
  easy, comfortable (positive)
  most purely enjoyable (positive)
  most plain, unimaginative (negative)
  badly edited (negative)
Hard cases:
  why didn't Hollywood think of this sooner (positive)
  I simply can't recommend it enough (positive)
  supposedly funny movie (negative)
  occasionally interesting (negative)

Table 1: Examples from the SST-2 sentiment classification task. Difficulty levels are determined by our review method (detailed later).

1 Introduction

Natural Language Understanding (NLU), which requires machines to understand and reason with human language, is a crucial yet challenging problem. Recently, language model (LM) pre-training has achieved remarkable success in NLU. Pre-trained LMs learn universal language representations from large-scale unlabeled data, and can be fine-tuned with a few adjustments to adapt to various NLU tasks, showing consistent and significant improvements on these tasks (Radford et al., 2018; Devlin et al., 2018).

Most current approaches perform fine-tuning in a straightforward manner, i.e., all training examples are treated equally and presented in a completely random order during training. However, even within the same NLU task, training examples can vary significantly in difficulty, with some easily solvable by simple lexical clues while others require sophisticated reasoning. Table 1 shows examples from the SST-2 sentiment classification task (Socher et al., 2013), which identifies the sentiment polarity (positive or negative) of movie reviews. The easy cases can be solved directly by identifying sentiment words such as "comfortable" and "unimaginative", while the hard ones further require reasoning with negations or verb qualifiers like "supposedly" and "occasionally". Extensive research suggests that presenting training examples in a meaningful order, starting from easy ones and gradually moving on to hard ones, would benefit the learning process, not only for humans but also for machines (Skinner, 1958; Elman, 1993; Peterson, 2004; Krueger and Dayan, 2009).
While much attention has been devoted to designing better pre-training strategies (Yang et al., 2019; Liu et al., 2019; Raffel et al., 2019), it is also valuable to explore how to solve downstream NLU tasks more effectively at the fine-tuning stage.

Such an organization of learning materials in the human learning procedure is usually referred to as a curriculum. In this paper, we draw inspiration from similar ideas and propose our approach for arranging a curriculum when learning NLU tasks. Curriculum Learning (CL) was first proposed by Bengio et al. (2009) in the machine learning area, where the definition of easy examples is established ahead of time, and an easy-to-difficult curriculum is arranged accordingly for the learning procedure. Recent developments have successfully applied CL in computer vision (Jiang et al., 2017; Guo et al., 2018; Hacohen and Weinshall, 2019). These works observe that, by excluding the negative impact of difficult or even noisy examples in the early training stage, an appropriate CL strategy can guide learning towards a better local minimum in parameter space, especially for highly non-convex deep models. We argue that language models like the Transformer, which are hard to train (Popel and Bojar, 2018), should also benefit from CL in the context of learning NLU tasks, an idea that still remains unexplored.

The key challenge in designing a successful CL strategy lies in how to define easy and difficult examples. One straightforward way is to pre-define the difficulty with devised rules, based on the formation of the particular target task or the structure of its training data (Guo et al., 2018; Platanios et al., 2019; Tay et al., 2019). For example, Bengio et al. (2009) utilized an easier version of a shape recognition trainset, comprising less varied shapes, before training on the complex one started. More recently, Tay et al. (2019) considered the paragraph length of a question answering example as its reflection of difficulty. However, such strategies are highly dependent on the target dataset itself and often fail to generalize to different tasks.

To address this challenge, we propose our Cross Review method for evaluating difficulty. Specifically, we define easy examples as those well solved by the exact model that we are to employ in the task. For different tasks, we adopt their corresponding golden metrics to calculate a difficulty score for each example in the trainset. Then, based on these difficulty scores, we design a re-arranging algorithm that constructs the learning curriculum in an annealing style, providing a soft transition from easy to difficult for the model. In general, our CL approach is not constrained to any particular task and does not rely on human prior heuristics about the task or dataset.
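To make the cross-review idea concrete, here is a minimal sketch of the procedure as described above (and depicted later in Figure 1). It is an illustration, not the authors' released code: `train`, `evaluate` (the task's golden metric, e.g. accuracy or F1), and the number of splits are placeholder assumptions.

```python
def cross_review(dataset, train, evaluate, n_splits=4):
    """Score every training example by how well it is solved by
    'teacher' models trained on the other meta-datasets."""
    # Split the trainset into N disjoint meta-datasets.
    metas = [dataset[i::n_splits] for i in range(n_splits)]
    # Train one teacher model per meta-dataset (placeholder trainer).
    teachers = [train(meta) for meta in metas]

    scores = []
    for k, meta in enumerate(metas):
        # Each example is inferenced by all teachers except the one
        # trained on its own meta-dataset; the golden-metric scores
        # are summed as its final evaluation (higher = easier).
        others = [t for j, t in enumerate(teachers) if j != k]
        for example in meta:
            scores.append((example, sum(evaluate(t, example) for t in others)))
    return scores
```

Sorting or bucketing the returned scores, easy examples first, is then the job of the Curriculum Arrangement stage, which the paper fills in with an annealing-style schedule (Section 3).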
Experimental results show that our CL approach can greatly help language models learn at the finetune stage. Without any task-tailored model architecture design or use of external data, we are able to obtain significant and universal improvements on a wide range of downstream NLU tasks. Our contributions can be summarized as follows:

• We explore and demonstrate the effectiveness of CL in the context of finetuning LMs on NLU tasks. To the best of our knowledge, this is one of the first times that a CL strategy has been shown to be broadly effective for learning NLU tasks.

• We propose a novel CL framework consisting of a Difficulty Review method and a Curriculum Arrangement algorithm, which requires no human pre-design and generalizes well across tasks.

• We obtain universal performance gains on a wide range of NLU tasks, including Machine Reading Comprehension (MRC) and Natural Language Inference. The improvements are especially significant on the more challenging tasks.

2 Preliminaries

We describe our CL approach using BERT (Devlin et al., 2018), the most influential pre-trained LM, which has achieved state-of-the-art results on a wide range of NLP tasks. BERT is pretrained with the Masked Language Model and Next Sentence Prediction tasks on large-scale corpora. It consists of a stack of l self-attention layers, which takes as input a sequence of no more than 512 tokens and outputs a contextual representation, an H-dimensional vector, for the token at each position i, which we denote as h_i^l ∈ R^H. In natural language understanding tasks, the input sequence usually starts with the special token ⟨CLS⟩ and ends with ⟨SEP⟩; for sequences consisting of two segments, as in pairwise sentence tasks, another ⟨SEP⟩ is added in between as a separator.

For target benchmarks, we employ a wide range of NLU tasks, including machine reading comprehension, sequence classification, and pairwise text similarity. Following Devlin et al. (2018), we adapt BERT to NLU tasks in the most straightforward way: we simply add one necessary linear layer upon the final hidden outputs, then finetune the entire model altogether. Specifically, we brief the configurations and corresponding metrics for the different tasks employed in our algorithms as follows:

Machine Reading Comprehension. In this work we consider the extractive MRC task. Given a passage P and a corresponding question Q, the goal is to extract a continuous span ⟨p_start, p_end⟩ from P as the answer A, where p_start and p_end are its boundaries. We pass the concatenation of the question and the passage, [⟨CLS⟩; Q; ⟨SEP⟩; P; ⟨SEP⟩], to the pre-trained LM and use a linear classifier on top of it to predict the answer span boundaries. For the i-th input token, the probabilities that it is the start or the end are calculated as:

[logit_i^{start}; logit_i^{end}] = W_{MRC}^T h_i^l
p^{start} = softmax({logit_i^{start}})
p^{end} = softmax({logit_i^{end}})

where W_{MRC} ∈ R^{2×H} is a trainable matrix. The training objective is the negative log-likelihood of the true start and end positions y_{start} and y_{end}:

loss = -(log p^{start}_{y_{start}} + log p^{end}_{y_{end}})

[Figure 1: Our Cross Review method. The target dataset is split into N meta-datasets; after the teachers are trained on them, each example is inferenced by all the other teachers (except the one it belongs to), and the scores are summed as the final evaluation result.]

3 Our CL Approach

We decompose our CL framework into two stages: Difficulty Evaluation and Curriculum Arrangement. For any target task, let D be the set of examples used for training, and Θ be our language model, which is expected to fit D.
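Before turning to the details of these two stages, the span-prediction head defined in Section 2 can be read concretely as follows. This is a minimal PyTorch-style sketch under our reading of the formulas above, not the authors' released code; `hidden` is a placeholder for the final-layer BERT outputs h^l.

```python
import torch.nn as nn
import torch.nn.functional as F

class SpanHead(nn.Module):
    """Linear span classifier on top of the LM's final hidden states."""

    def __init__(self, hidden_size):
        super().__init__()
        # W_MRC in R^{2 x H}: one start logit and one end logit per token.
        self.w = nn.Linear(hidden_size, 2)

    def forward(self, hidden, y_start, y_end):
        # hidden: [batch, seq_len, H] final-layer token representations.
        start_logits, end_logits = self.w(hidden).split(1, dim=-1)
        start_logits = start_logits.squeeze(-1)  # [batch, seq_len]
        end_logits = end_logits.squeeze(-1)
        # cross_entropy applies softmax over the seq_len positions, so this
        # is exactly loss = -(log p_start[y_start] + log p_end[y_end]).
        return (F.cross_entropy(start_logits, y_start)
                + F.cross_entropy(end_logits, y_end))
```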
Recommended publications
  • NVIDIA CEO Jensen Huang to Host AI Pioneers Yoshua Bengio, Geoffrey Hinton and Yann LeCun, and Others, at GTC21
    NVIDIA CEO Jensen Huang to Host AI Pioneers Yoshua Bengio, Geoffrey Hinton and Yann LeCun, and Others, at GTC21. Online Conference to Feature Jensen Huang Keynote and 1,300 Talks from Leaders in Data Center, Networking, Graphics and Autonomous Vehicles.

    NVIDIA today announced that its CEO and founder Jensen Huang will host renowned AI pioneers Yoshua Bengio, Geoffrey Hinton and Yann LeCun at the company's upcoming technology conference, GTC21, running April 12-16. The event will kick off with a news-filled livestreamed keynote by Huang on April 12 at 8:30 am Pacific. Bengio, Hinton and LeCun won the 2018 ACM Turing Award, known as the Nobel Prize of computing, for breakthroughs that enabled the deep learning revolution. Their work underpins the proliferation of AI technologies now being adopted around the world, from natural language processing to autonomous machines. Bengio is a professor at the University of Montreal and head of Mila - Quebec Artificial Intelligence Institute; Hinton is a professor at the University of Toronto and a researcher at Google; and LeCun is a professor at New York University and chief AI scientist at Facebook. More than 100,000 developers, business leaders, creatives and others are expected to register for GTC, including CxOs and IT professionals focused on data center infrastructure. Registration is free and is not required to view the keynote. In addition to the three Turing winners, major speakers include:
    Girish Bablani, Corporate Vice President, Microsoft Azure
    John Bowman, Director of Data Science, Walmart
  • On Recurrent and Deep Neural Networks
    On Recurrent and Deep Neural Networks. Razvan Pascanu. Advisor: Yoshua Bengio. PhD Defence, Université de Montréal, LISA lab, September 2014.

    Motivation: "A computer once beat me at chess, but it was no match for me at kick boxing" | Emo Phillips. Studying the mechanism behind learning provides a meta-solution for solving tasks.

    Supervised learning: a model family f : Θ × D → T with f_θ(x) = f(θ, x), trained to find f* = argmin_{θ ∈ Θ} E_{x,t ∼ π}[d(f_θ(x), t)].

    Optimization for learning: iterative parameter updates θ^[k] → θ^[k+1].

    [Slide figures: a feed-forward network (input layer, hidden layers with bias units, output neurons) and a recurrent network with a recurrent layer.]
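    Read as code, the objective above is empirical risk minimization, and the θ^[k] → θ^[k+1] updates are gradient steps. The snippet below is an illustrative sketch only (the slides give no code); the linear model f_θ(x) = θ·x and the squared-error distance d are assumptions chosen to keep it minimal.

```python
import numpy as np

# Toy instance of f* = argmin_theta E[d(f_theta(x), t)] with
# f_theta(x) = theta . x and d(y, t) = (y - t)^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # inputs x ~ pi
theta_true = np.array([1.0, -2.0, 0.5])
T = X @ theta_true                      # targets t

theta = np.zeros(3)
lr = 0.1
for k in range(200):
    grad = 2 * X.T @ (X @ theta - T) / len(X)  # gradient of the empirical risk
    theta = theta - lr * grad                  # theta^[k+1] <- theta^[k] - lr * grad
print(theta)  # converges towards theta_true
```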
  • Yoshua Bengio and Gary Marcus on the Best Way Forward for AI
    Yoshua Bengio and Gary Marcus on the Best Way Forward for AI Transcript of the 23 December 2019 AI Debate, hosted at Mila Moderated and transcribed by Vincent Boucher, Montreal AI AI DEBATE : Yoshua Bengio | Gary Marcus — Organized by MONTREAL.AI and hosted at Mila, on Monday, December 23, 2019, from 6:30 PM to 8:30 PM (EST) Slides, video, readings and more can be found on the MONTREAL.AI debate webpage. Transcript of the AI Debate Opening Address | Vincent Boucher Good Evening from Mila in Montreal Ladies & Gentlemen, Welcome to the “AI Debate”. I am Vincent Boucher, Founding Chairman of Montreal.AI. Our participants tonight are Professor GARY MARCUS and Professor YOSHUA BENGIO. Professor GARY MARCUS is a Scientist, Best-Selling Author, and Entrepreneur. Professor MARCUS has published extensively in neuroscience, genetics, linguistics, evolutionary psychology and artificial intelligence and is perhaps the youngest Professor Emeritus at NYU. He is Founder and CEO of Robust.AI and the author of five books, including The Algebraic Mind. His newest book, Rebooting AI: Building Machines We Can Trust, aims to shake up the field of artificial intelligence and has been praised by Noam Chomsky, Steven Pinker and Garry Kasparov. Professor YOSHUA BENGIO is a Deep Learning Pioneer. In 2018, Professor BENGIO was the computer scientist who collected the largest number of new citations worldwide. In 2019, he received, jointly with Geoffrey Hinton and Yann LeCun, the ACM A.M. Turing Award — “the Nobel Prize of Computing”. He is the Founder and Scientific Director of Mila — the largest university-based research group in deep learning in the world.
  • Hello, It's GPT-2
    Hello, It's GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems. Paweł Budzianowski1,2,3 and Ivan Vulić2,3. 1Engineering Department, Cambridge University, UK; 2Language Technology Lab, Cambridge University, UK; 3PolyAI Limited, London, UK. [email protected], [email protected]

    Abstract. Data scarcity is a long-standing and crucial challenge that hinders quick development of task-oriented dialogue systems across multiple domains: task-oriented dialogue models are expected to learn grammar, syntax, dialogue reasoning, decision making, and language generation from absurdly small amounts of task-specific data. In this paper, we demonstrate that recent progress in language modeling pre-training and transfer learning shows promise to overcome this problem. We propose a task-oriented dialogue model that operates solely on text input: it effectively bypasses explicit policy and language generation modules. Building on top of the TransferTransfo framework...

    ... (Young et al., 2013). On the other hand, open-domain conversational bots (Li et al., 2017; Serban et al., 2017) can leverage large amounts of freely available unannotated data (Ritter et al., 2010; Henderson et al., 2019a). Large corpora allow for training end-to-end neural models, which typically rely on sequence-to-sequence architectures (Sutskever et al., 2014). Although highly data-driven, such systems are prone to producing unreliable and meaningless responses, which impedes their deployment in the actual conversational applications (Li et al., 2017). Due to the unresolved issues with the end-to-end architectures, the focus has been extended to retrieval-based models.
  • Hierarchical Multiscale Recurrent Neural Networks
    Hierarchical Multiscale Recurrent Neural Networks. Published as a conference paper at ICLR 2017. Junyoung Chung, Sungjin Ahn & Yoshua Bengio. Département d'informatique et de recherche opérationnelle, Université de Montréal. {junyoung.chung,sungjin.ahn,yoshua.bengio}@umontreal.ca

    ABSTRACT. Learning both hierarchical and temporal representation has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of model can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural network, that can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that the proposed model can discover the underlying hierarchical structure in sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence generation.

    1 INTRODUCTION. One of the key principles of learning in deep neural networks as well as in the human brain is to obtain a hierarchical representation with increasing levels of abstraction (Bengio, 2009; LeCun et al., 2015; Schmidhuber, 2015). A stack of representation layers, learned from the data in a way to optimize the target task, makes deep neural networks entertain advantages such as generalization to unseen examples (Hoffman et al., 2013), sharing learned knowledge among multiple tasks, and discovering disentangling factors of variation (Kingma & Welling, 2013).
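    The "different timescales" idea in the abstract can be pictured with a deliberately simplified sketch: a two-layer RNN whose upper layer updates only every few steps. The fixed update period here is an assumption for illustration; the paper's actual contribution is an update mechanism that learns the boundaries rather than fixing them.

```python
import torch
import torch.nn as nn

class TwoTimescaleRNN(nn.Module):
    """Fast lower layer, slow upper layer updated every `period` steps."""

    def __init__(self, n_in, n_hid, period=4):
        super().__init__()
        self.low = nn.RNNCell(n_in + n_hid, n_hid)  # fast layer, sees top-down state
        self.high = nn.RNNCell(n_hid, n_hid)        # slow layer, sees bottom-up state
        self.period = period

    def forward(self, xs):  # xs: [seq_len, batch, n_in]
        batch = xs.size(1)
        h_low = xs.new_zeros(batch, self.low.hidden_size)
        h_high = xs.new_zeros(batch, self.high.hidden_size)
        for t, x in enumerate(xs):
            h_low = self.low(torch.cat([x, h_high], dim=-1), h_low)
            if (t + 1) % self.period == 0:  # slow-timescale update
                h_high = self.high(h_low, h_high)
        return h_low, h_high
```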
  • I2T2I: Learning Text to Image Synthesis with Textual Data Augmentation
    I2T2I: Learning Text to Image Synthesis with Textual Data Augmentation. Hao Dong, Jingqing Zhang, Douglas McIlwraith, Yike Guo. Data Science Institute, Imperial College London.

    ABSTRACT. Translating information between text and image is a fundamental problem in artificial intelligence that connects natural language processing and computer vision. In the past few years, performance in image caption generation has seen significant improvement through the adoption of recurrent neural networks (RNN). Meanwhile, text-to-image generation began to generate plausible images using datasets of specific categories like birds and flowers. We've even seen image generation from multi-category datasets such as the Microsoft Common Objects in Context (MSCOCO) through the use of generative adversarial networks (GANs). Synthesizing objects with a complex shape, however, is still challenging. For example, animals and humans have many degrees of freedom, which means that they can take on many complex shapes. We propose a new training method called Image-Text-Image (I2T2I) which integrates text-to-image and image-to-text (image captioning) synthesis to improve the performance of text-to-image synthesis. We demonstrate that I2T2I can generate better multi-category images using MSCOCO than the state-of-the-art.

    [Fig. 1: Comparing text-to-image synthesis with and without textual data augmentation (GAN-CLS vs. I2T2I) on the captions "A yellow school bus parked in a parking lot." and "A man swinging a baseball bat over home plate."; I2T2I synthesizes the human pose and bus color better.]

    ... of text-to-image synthesis is still far from being solved. Recently, both fractionally-strided convolutional and con-
  • Yoshua Bengio
    Yoshua Bengio. Département d'informatique et recherche opérationnelle, Université de Montréal. Canada Research Chair on Statistical Learning Algorithms. Phone: 514-343-6804. Fax: 514-343-5834. [email protected], www.iro.umontreal.ca/~bengioy

    Titles and Distinctions
    • Full Professor, Université de Montréal, since 2002. Previously associate professor (1997-2002) and assistant professor (1993-1997).
    • Canada Research Chair on Statistical Learning Algorithms since 2000 (tier 2: 2000-2005, tier 1 since 2006).
    • NSERC-Ubisoft Industrial Research Chair, since 2011. Previously NSERC-CGI Industrial Chair, 2005-2010.
    • Recipient of the ACFAS Urgel-Archambault 2009 prize (covering physics, mathematics, computer science, and engineering).
    • Fellow, CIFAR (Canadian Institute For Advanced Research), since 2004. Co-director of the CIFAR NCAP program since 2014.
    • Member of the board of the Neural Information Processing Systems (NIPS) Foundation, since 2010.
    • Action Editor, Journal of Machine Learning Research (JMLR), Neural Computation, Foundations and Trends in Machine Learning, and Computational Intelligence. Member of the 2012 editor-in-chief nominating committee for JMLR.
    • Fellow, CIRANO (Centre Inter-universitaire de Recherche en Analyse des Organisations), since 1997.
    • Previously Associate Editor, Machine Learning, IEEE Trans. on Neural Networks.
    • Founder (1993) and head of the Laboratoire d'Informatique des Systèmes Adaptatifs (LISA) and the Montreal Institute for Learning Algorithms (MILA), currently including 5 faculty, 40 students, 5 post-docs, 5 researchers on staff, and numerous visitors. This is probably the largest concentration of deep learning researchers in the world (around 60 researchers).
    • Member of the board of the Centre de Recherches Mathématiques, UdeM, 1999-2009. Member of the Awards Committee of the Canadian Association for Computer Science (2012-2013).
  • Generalized Denoising Auto-Encoders As Generative Models
    Generalized Denoising Auto-Encoders as Generative Models. Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. Département d'informatique et recherche opérationnelle, Université de Montréal.

    Abstract. Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or when using other forms of corruption process and reconstruction errors. Another issue is the mathematical justification, which is only valid in the limit of small corruption noise. We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).

    1 Introduction. Auto-encoders learn an encoder function from input to representation and a decoder function back from representation to input space, such that the reconstruction (composition of encoder and decoder) is good for training examples. Regularized auto-encoders also involve some form of regularization that prevents the auto-encoder from simply learning the identity function, so that reconstruction error will be low at training examples (and hopefully at test examples) but high in general.
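    The sampling the abstract mentions can be pictured as a chain that alternates corruption with reconstruction. The sketch below is an illustrative simplification, not the paper's exact procedure: Gaussian corruption is assumed, and a deterministic reconstruction function stands in for sampling from the learned P(X | X~).

```python
import torch

def dae_sampling_chain(reconstruct, x0, steps=100, noise=0.5):
    """Generate samples by alternating corruption and reconstruction.

    reconstruct: a trained denoising autoencoder's clean-input estimate
                 (placeholder: any x -> x_hat callable works here).
    """
    samples, x = [], x0
    for _ in range(steps):
        x_tilde = x + noise * torch.randn_like(x)  # corrupt: C(X~ | X)
        x = reconstruct(x_tilde)                   # stands in for sampling P(X | X~)
        samples.append(x)
    return samples
```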
  • Exposing GAN-Synthesized Faces Using Landmark Locations
    Exposing GAN-synthesized Faces Using Landmark Locations. Xin Yang∗, Yuezun Li∗, Honggang Qi†, Siwei Lyu∗. ∗Computer Science Department, University at Albany, State University of New York, USA. †School of Computer and Control Engineering, University of the Chinese Academy of Sciences, China.

    ABSTRACT. Generative adversarial networks (GANs) have recently led to highly realistic image synthesis results. In this work, we describe a new method to expose GAN-synthesized images using the locations of the facial landmark points. Our method is based on the observation that the facial parts configurations generated by GAN models are different from those of real faces, due to the lack of global constraints. We perform experiments demonstrating this phenomenon, and show that an SVM classifier trained using the locations of facial landmark points is sufficient to achieve good classification performance for GAN-synthesized faces.

    Unlike previous image/video manipulation methods, realistic images are generated completely from random noise through a deep neural network. Current detection methods are based on low-level features such as color disparities [10, 13], or use the whole image as input to a neural network to extract holistic features [19]. In this work, we develop a new GAN-synthesized face detection method based on more semantically meaningful features, namely the locations of facial landmark points. This is because GAN-synthesized faces exhibit certain abnormalities in the facial landmark locations. Specifically, the GAN-based face synthesis algorithm can generate face parts (e.g., eyes, nose, skin, and mouth) with a great level of realistic detail, yet it does not have an explicit constraint over the locations of these parts in a face.
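    The pipeline the abstract describes, landmark locations in and an SVM verdict out, fits in a few lines of scikit-learn. The data below is a random stand-in for features produced by a real landmark detector (e.g. a 68-point model), so the numbers are meaningless; only the shape of the pipeline is the point.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Stand-in data: 68 (x, y) landmark locations per face, flattened.
# In the paper's setting these come from a landmark detector run on
# real vs. GAN-synthesized faces; here they are random placeholders.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 68 * 2))
fake = rng.normal(0.2, 1.0, size=(200, 68 * 2))  # slightly shifted configuration

X = np.vstack([real, fake])
y = np.array([0] * len(real) + [1] * len(fake))  # 1 = GAN-synthesized

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```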
  • Extracting and Composing Robust Features with Denoising Autoencoders
    Extracting and Composing Robust Features with Denoising Autoencoders. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, Pierre-Antoine Manzagol. Dept. IRO, Université de Montréal, C.P. 6128, Montreal, Qc, H3C 3J7, Canada. http://www.iro.umontreal.ca/~lisa. Technical Report 1316, February 2008.

    Abstract. Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.

    1 Introduction. Recent theoretical studies indicate that deep architectures (Bengio & Le Cun, 2007; Bengio, 2007) may be needed to efficiently model complex distributions and achieve better generalization performance on challenging recognition tasks. The belief that additional levels of functional composition will yield increased representational and modeling power is not new (McClelland et al., 1986; Hinton, 1989; Utgoff & Stracuzzi, 2002). However, in practice, learning in deep architectures has proven to be difficult. One needs only to ponder the difficult problem of inference in deep directed graphical models, due to "explaining away". Also looking back at the history of multi-layer neural networks, their difficult optimization (Bengio et al., 2007; Bengio, 2007) has long prevented reaping the expected benefits of going beyond one or two hidden layers.
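    The training principle stated here, corrupt the input and reconstruct the clean version, is compact enough to sketch. The snippet assumes PyTorch, masking noise, and arbitrary sizes; treat it as an illustration of the idea rather than the paper's exact setup (which stacks such layers to initialize deep networks).

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hid=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hid), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hid, n_in), nn.Sigmoid())

    def forward(self, x, corruption=0.3):
        # Partially corrupt the input by zeroing a random fraction of it.
        mask = (torch.rand_like(x) > corruption).float()
        return self.decoder(self.encoder(x * mask))

dae = DenoisingAutoencoder()
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
x = torch.rand(64, 784)  # stand-in batch of inputs in [0, 1]
for step in range(100):
    opt.zero_grad()
    # The reconstruction target is the *uncorrupted* input.
    loss = nn.functional.binary_cross_entropy(dae(x), x)
    loss.backward()
    opt.step()
```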
  • The Creation and Detection of Deepfakes: A Survey
    The Creation and Detection of Deepfakes: A Survey. YISROEL MIRSKY∗, Georgia Institute of Technology and Ben-Gurion University. WENKE LEE, Georgia Institute of Technology.

    Generative deep learning algorithms have progressed to a point where it is difficult to tell the difference between what is real and what is fake. In 2018, it was discovered how easy it is to use this technology for unethical and malicious applications, such as the spread of misinformation, impersonation of political leaders, and the defamation of innocent individuals. Since then, these 'deepfakes' have advanced significantly. In this paper, we explore the creation and detection of deepfakes and provide an in-depth view of how these architectures work. The purpose of this survey is to provide the reader with a deeper understanding of (1) how deepfakes are created and detected, (2) the current trends and advancements in this domain, (3) the shortcomings of the current defense solutions, and (4) the areas which require further research and attention.

    CCS Concepts: • Security and privacy → Social engineering attacks; Human and societal aspects of security and privacy; • Computing methodologies → Machine learning;

    Additional Key Words and Phrases: Deepfake, Deep fake, reenactment, replacement, face swap, generative AI, social engineering, impersonation

    ACM Reference format: Yisroel Mirsky and Wenke Lee. 2020. The Creation and Detection of Deepfakes: A Survey. ACM Comput. Surv. 1, 1, Article 1 (January 2020), 38 pages. DOI: XX.XXXX/XXXXXXX.XXXXXXX

    1 INTRODUCTION. A deepfake is content, generated by an artificial intelligence, that is authentic in the eyes of a human being. The word deepfake is a combination of the words 'deep learning' and 'fake' and primarily relates to content generated by an artificial neural network, a branch of machine learning.
  • Unsupervised Pretraining, Autoencoder and Manifolds
    Unsupervised Pretraining, Autoencoder and Manifolds. Christian Herta.

    Outline
    ● Autoencoders
    ● Unsupervised pretraining of deep networks with autoencoders
    ● Manifold-Hypotheses

    Autoencoder
    Problems of training deep neural networks
    ● stochastic gradient descent + standard algorithm "Backpropagation": vanishing or exploding gradient, the "Vanishing Gradient Problem" [Hochreiter 1991]
    ● only shallow nets are trainable => feature engineering
    ● for applications (in the past): mostly only one layer

    Solutions for training deep nets
    ● layer-wise pretraining (first by [Hin06] with RBM) with unlabeled data (unsupervised pretraining): Restricted Boltzmann Machines (RBM), stacked autoencoders, contrastive estimation
    ● more effective optimization: second-order methods, like "Hessian-free optimization"
    ● more careful initialization + other neuron types (e.g. rectified linear/maxout) + dropout + more sophisticated momentum (e.g. Nesterov momentum); see e.g. [Glo11]

    Representation Learning
    ● "Feature Learning" instead of "Feature Engineering"
    ● Multi-Task Learning: learned features (distributed representations) can be used for different tasks; unsupervised pretraining + supervised finetuning
    [Diagram: shared hidden layers feeding separate output heads for tasks A, B and C; from Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, Samy Bengio: Why Does Unsupervised Pre-training Help Deep Learning? JMLR 2010]

    Layer-wise pretraining with autoencoders
    ● Goal: encoder-decoder reconstruction of the input (input = output)
    ● different constraints on
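    The layer-wise recipe in this outline, pretrain each layer as an autoencoder on the previous layer's codes and then fine-tune with labels, might look as follows. The sizes, the plain reconstruction objective, and the sigmoid activations are assumptions for illustration, not taken from the slides.

```python
import torch
import torch.nn as nn

def pretrain_layer(layer, data, epochs=50):
    """Train `layer` as the encoder of a one-hidden-layer autoencoder."""
    decoder = nn.Linear(layer.out_features, layer.in_features)
    params = list(layer.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        recon = decoder(torch.sigmoid(layer(data)))
        loss = nn.functional.mse_loss(recon, data)  # reconstruct the layer's input
        loss.backward()
        opt.step()
    # Return the codes the next layer will be pretrained on.
    return torch.sigmoid(layer(data)).detach()

x = torch.rand(256, 100)  # stand-in unlabeled data
layers = [nn.Linear(100, 64), nn.Linear(64, 32)]
h = x
for layer in layers:  # greedy layer-wise (unsupervised) pretraining
    h = pretrain_layer(layer, h)

# Supervised fine-tuning: stack the pretrained layers plus a task head.
model = nn.Sequential(layers[0], nn.Sigmoid(), layers[1], nn.Sigmoid(), nn.Linear(32, 10))
```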