An Exploration of Word Embedding Initialization in Deep-Learning Tasks

Tom Kocmi and Ondřej Bojar
Charles University, Faculty of Mathematics and Physics
Institute of Formal and Applied Linguistics
surname@ufal.mff.cuni.cz

arXiv:1711.09160v1 [cs.CL] 24 Nov 2017

Abstract

Word embeddings are the interface between the world of discrete units of text processing and the continuous, differentiable world of neural networks. In this work, we examine various random and pretrained initialization methods for embeddings used in deep networks and their effect on the performance on four NLP tasks with both recurrent and convolutional architectures. We confirm that pretrained embeddings are a little better than random initialization, especially considering the speed of learning. On the other hand, we do not see any significant difference between various methods of random initialization, as long as the variance is kept reasonably low. High-variance initialization prevents the network from using the space of embeddings and forces it to use other free parameters to accomplish the task. We support this hypothesis by observing the performance in learning lexical relations and by the fact that the network can learn to perform reasonably in its task even with fixed random embeddings.

1 Introduction

Embeddings or lookup tables (Bengio et al., 2003) are used for units of different granularity, from characters (Lee et al., 2016) to subword units (Sennrich et al., 2016; Wu et al., 2016) up to words. In this paper, we focus solely on word embeddings (embeddings attached to individual token types in the text). In their highly dimensional vector space, word embeddings are capable of representing many aspects of similarities between words: semantic relations or morphological properties (Mikolov et al., 2013; Kocmi and Bojar, 2016) in one language or cross-lingually (Luong et al., 2015).

Embeddings are trained for a task. In other words, the vectors that embeddings assign to each word type are almost never provided manually but always discovered automatically in a neural network trained to carry out a particular task. The well-known embeddings are those by Mikolov et al. (2013), where the task is to predict the word from its neighboring words (CBOW) or the neighbors from the given word (Skip-gram). Trained on a huge corpus, these "Word2Vec" embeddings show an interesting correspondence between lexical relations and arithmetic operations in the vector space. The most famous example is the following:

v(king) − v(man) + v(woman) ≈ v(queen)

In other words, adding the vectors associated with the words 'king' and 'woman' while subtracting 'man' should be equal to the vector associated with the word 'queen'. We can also say that the difference vectors v(king) − v(queen) and v(man) − v(woman) are almost identical and describe the gender relationship.
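To make the arithmetic above concrete, here is a small NumPy sketch of analogy retrieval by vector offset and cosine similarity. It is purely illustrative and not part of the paper: the dictionary emb of toy vectors and the function names are assumptions, and real experiments would load trained Word2Vec or GloVe vectors instead.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(emb, a, b, c):
    # Return the word whose vector is closest to v(a) - v(b) + v(c),
    # excluding the three query words themselves.
    target = emb[a] - emb[b] + emb[c]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

# Toy stand-in vectors; with trained embeddings the expected answer is 'queen'.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=100) for w in ("king", "queen", "man", "woman", "car")}
print(analogy(emb, "king", "man", "woman"))

With random stand-in vectors the output is of course arbitrary; the accuracy figure discussed next refers to vectors actually trained with Word2Vec.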
Word2Vec is not trained with a goal of proper representation of relationships, therefore the absolute accuracy scores around 50% do not allow us to rely on these relation predictions. Still, it is a rather interesting property observed empirically in the learned space. Another extensive study of embedding space has been conducted by Hill et al. (2017).

Word2Vec embeddings as well as GloVe embeddings (Pennington et al., 2014) became very popular and they were tested in many tasks, also because for English they can be simply downloaded as pretrained on huge corpora. Word2Vec was trained on the 100-billion-word Google News dataset (1) and GloVe embeddings were trained on 6 billion words from Wikipedia. Sometimes, they are used as a fixed mapping for a better robustness of the system (Kenter and De Rijke, 2015), but they are more often used to seed the embeddings in a system and they are further trained in the particular end-to-end application (Collobert et al., 2011; Lample et al., 2016).

(1) See https://code.google.com/archive/p/word2vec/.

In practice, random initialization of embeddings is still more common than using pretrained embeddings, and it should be noted that pretrained embeddings are not always better than random initialization (Dhingra et al., 2017). We are not aware of any study of the effects of various random embedding initializations on the training performance.

In the first part of the paper, we explore various English word embedding initializations in four tasks: neural machine translation (denoted MT in the following for short), language modeling (LM), part-of-speech tagging (TAG) and lemmatization (LEM), covering both common styles of neural architectures: the recurrent and convolutional neural networks, RNN and CNN, resp.

In the second part, we explore the obtained embedding spaces in an attempt to better understand what the networks have learned about word relations.

2 Embeddings initialization

Given a vocabulary V of words, embeddings represent each word as a dense vector of size d (as opposed to a "one-hot" representation where each word would be represented as a sparse vector of size |V| with all zeros except for one element indicating the given word). Formally, embeddings are stored in a matrix E ∈ R^{|V|×d}.

For a given word type w ∈ V, a row is selected from E. Thus, E is often referred to as the word lookup table. The size of embeddings d is often set between 100 and 1000 (Bahdanau et al., 2014; Vaswani et al., 2017; Gehring et al., 2017).

2.1 Initialization methods

Many different methods can be used to initialize the values in E at the beginning of neural network training. We distinguish between randomly initialized and pretrained embeddings, where the latter can be further divided into embeddings pretrained on the same task and embeddings pretrained on a standard task such as Word2Vec or GloVe.

Random initialization methods common in the literature (2) sample values either uniformly from a fixed interval centered at zero or, more often, from a zero-mean normal distribution with the standard deviation varying from 0.001 to 10.

(2) Aside from related NN task papers such as Bahdanau et al. (2014) or Gehring et al. (2017), we also checked several popular neural network frameworks (TensorFlow, Theano, Torch, ...) to collect various initialization parameters.

The parameters of the distribution can be set empirically or calculated based on some assumptions about the training of the network. The second approach has been used for various hidden layer initializations (i.e. not the embedding layer). E.g. Glorot and Bengio (2010) and He et al. (2015) argue that sustaining the variance of values throughout the whole network leads to the best results, and they define the parameters for initialization so that the layer weights W have the same variance of output as is the variance of the input.

Glorot and Bengio (2010) define the "Xavier" initialization method. They suppose a linear neural network for which they derive the weights initialization as

    W ∼ U[−√6/√(n_i + n_o), √6/√(n_i + n_o)]    (1)

where n_i is the size of the input and n_o is the size of the output. The initialization for nonlinear networks using ReLU units has been derived similarly by He et al. (2015) as

    W ∼ N(0, 2/n_i)    (2)

The same assumption of sustaining variance cannot be applied to embeddings because there is no input signal whose variance should be sustained to the output. We nevertheless try these initializations as well, denoting them Xavier and He, respectively.
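For illustration, the following NumPy sketch fills the lookup matrix E with the initializations discussed above: uniform around zero, zero-mean normal with a chosen standard deviation, and the Xavier and He schemes of Equations (1) and (2). It is a minimal sketch, not the paper's implementation; in particular, taking n_i = |V| (the one-hot input) and n_o = d for the embedding layer, as well as the function name and parameter values, are assumptions made here.

import numpy as np

def init_embeddings(vocab_size, d, method="normal", std=0.01, seed=0):
    # Build the |V| x d lookup matrix E under one of the discussed schemes.
    rng = np.random.default_rng(seed)
    if method == "uniform":
        # Uniform sampling from a fixed interval centred at zero.
        return rng.uniform(-0.1, 0.1, size=(vocab_size, d))
    if method == "normal":
        # Zero-mean normal; the literature uses std anywhere from 0.001 to 10.
        return rng.normal(0.0, std, size=(vocab_size, d))
    if method == "xavier":
        # Eq. (1), assuming n_i = |V| (one-hot input) and n_o = d.
        limit = np.sqrt(6.0) / np.sqrt(vocab_size + d)
        return rng.uniform(-limit, limit, size=(vocab_size, d))
    if method == "he":
        # Eq. (2): zero-mean normal with variance 2 / n_i.
        return rng.normal(0.0, np.sqrt(2.0 / vocab_size), size=(vocab_size, d))
    raise ValueError("unknown method: %s" % method)

# A word id simply selects one row of E, hence the name "lookup table".
E = init_embeddings(vocab_size=50000, d=300, method="xavier")
vector = E[42]   # dense 300-dimensional embedding of word type 42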
2.2 Pretrained embeddings

Pretrained embeddings, as opposed to random initialization, could work better, because they already contain some information about word relations. To obtain pretrained embeddings, we can train a randomly initialized model (initialized from the normal distribution with a standard deviation of 0.01), extract the embeddings from the final model and use them as pretrained embeddings for the following trainings on the same task. Such embeddings contain information useful for the task in question and we refer to them as self-pretrain.

A more common approach is to download some ready-made "generic" embeddings such as Word2Vec and GloVe, which are not directly related to the final task but have been shown to contain many morpho-syntactic relations between words (Mikolov et al., 2013; Kocmi and Bojar, 2016). Those embeddings are trained on billions of monolingual examples and can be easily reused in most existing neural architectures.

3 Experimental setup

This section describes the neural models we use for our four tasks and the training and testing datasets.

3.1 Models description

For all our four tasks (MT, LM, TAG, and LEM), [...] are no pretrained embeddings available for Czech.

The goal of the language model (LM) is to predict the next word based on the history of previous words. Language modeling can thus be seen as (neural) machine translation without the encoder part: no source sentence is given to translate and we only predict words conditioned on the previous word and the state computed from predicted words. Therefore the parameters of the neural network are the same as for the MT decoder. The only difference is that we use dropout with a keep probability of 0.7 (Srivastava et al., 2014). The generated sentence is evaluated as the perplexity against the gold output words (English in our case).

The third task is POS tagging (TAG). We use our custom network architecture: the model starts with a bidirectional encoder as in MT. For each encoder state, a fully connected linear layer then predicts a tag. The parameters are set to be equal to the encoder in MT; the predicting layer has a size equal to the number of tags.
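As a small aside on the LM evaluation mentioned above, perplexity can be computed from the probabilities the model assigns to the gold words. The sketch below is a generic illustration with made-up numbers, assuming the model exposes one probability per gold token; it is not the paper's evaluation code.

import numpy as np

def perplexity(gold_word_probs):
    # Perplexity = exp of the average negative log-probability that the
    # model assigns to each gold token in the sequence.
    log_probs = np.log(np.asarray(gold_word_probs, dtype=float))
    return float(np.exp(-log_probs.mean()))

# A model giving every gold word probability 0.1 has perplexity 10;
# more confident (correct) predictions push perplexity down.
print(perplexity([0.1, 0.1, 0.1]))      # 10.0
print(perplexity([0.5, 0.25, 0.125]))   # 4.0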