Deep Learning Book, by Ian Goodfellow, Yoshua Bengio and Aaron Courville, Chapter 6: Deep Feedforward Networks

Total Pages: 16

File Type: pdf, Size: 1020 KB

Deep Learning Book, by Ian Goodfellow, Yoshua Bengio and Aaron Courville, Chapter 6: Deep Feedforward Networks. Slides by Benoit Massé and Dionyssos Kounades-Bastian.

Linear regression (and classification). Input vector x, output vector y, parameters: weight W and bias b. Prediction: y = W^T x + b. Advantages: easy to use, easy to train, low risk of overfitting. Drawback: some problems are inherently non-linear.

Solving XOR. A linear regressor cannot solve XOR: there is no value of W and b such that, for all (x1, x2) in {0, 1}^2, W^T (x1, x2) + b = xor(x1, x2). What about stacking two linear layers, y = V(Wx + b) + c? This is strictly equivalent: the composition of two linear operations is still a linear operation. And what about inserting a non-linearity, y = V φ(Wx + b) + c, in which φ(x) = max{0, x}? It is possible! With W = [[1, 1], [1, 1]], b = (0, -1), V = (1, -2) and c = 0, V φ(Wx + b) = xor(x1, x2).

Neural network with one hidden layer. Compact representation: x → h = φ(Wx + b) → y = Vh + c. A hidden layer with a non-linearity can represent a broader class of functions.

Universal approximation theorem. A neural network with one hidden layer can approximate any continuous function. More formally, given a continuous function f : C_n → R^m, where C_n is a compact subset of R^n, then for every ε > 0 there exists f_NN^ε : x → Σ_{i=1..K} v_i φ(w_i^T x + b_i) + c such that, for all x in C_n, ||f(x) − f_NN^ε(x)|| < ε.
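To check the XOR construction numerically, here is a minimal NumPy sketch (a verification added here, not part of the original slides) that evaluates V φ(Wx + b) with the weights quoted above and reproduces xor(x1, x2) on all four binary inputs.

```python
import numpy as np

# Checking the XOR construction from the slides: one hidden layer with the
# ReLU non-linearity phi(x) = max{0, x} and the weights quoted above.
W = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.0, -1.0])
V = np.array([1.0, -2.0])
c = 0.0

def relu(u):
    return np.maximum(0.0, u)

def f_nn(x):
    # h = phi(W x + b), then y = V h + c
    return V @ relu(W @ x + b) + c

for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"xor({x1}, {x2}) = {f_nn(np.array([x1, x2], dtype=float)):.0f}")
# Prints 0, 1, 1, 0: the two-layer network reproduces XOR exactly.
```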
Problems. Obtaining the network: the universal approximation theorem gives no information about HOW to obtain such a network, i.e. the size of the hidden layer h or the values of W and b. Using the network: even if we find a way to obtain the network, the size of the hidden layer may be prohibitively large.

Deep neural network. Why deep? Let's stack l hidden layers one after the other; l is called the depth of the network: x → h1 → ... → hl → y, with parameters (W1, b1, φ), ..., (Wl, bl, φ) for the hidden layers and (V, c) for the output. Properties of DNNs: the universal approximation theorem also applies, and some functions can be approximated by a DNN with N hidden units that would require O(e^N) hidden units to be represented by a shallow network.

Summary. Linear classifier: limited representational power (−), but simple (+). Shallow neural network (exactly one hidden layer): unlimited representational power (+), but sometimes prohibitively wide (−). Deep neural network: unlimited representational power (+) and a relatively small number of hidden units needed (+). Remaining problem: how to get this DNN?

On the path of getting my own DNN. Hyperparameters: first, we need to define the architecture of the DNN, i.e. the depth l, the sizes of the hidden layers n1, ..., nl, the activation function φ, and the output unit. Parameters: when the architecture is defined, we need to train the DNN, i.e. learn W1, b1, ..., Wl, bl.

Hyperparameters. The depth l and the sizes of the hidden layers n1, ..., nl strongly depend on the problem to solve. The activation function φ: ReLU g : x → max{0, x}, sigmoid σ : x → (1 + e^{−x})^{−1}, and many others (tanh, RBF, softplus, ...). The output unit: a linear output E[y] = V^T h_l + c for regression with a Gaussian distribution y ~ N(E[y], I), or a sigmoid output ŷ = σ(w^T h_l + b) for classification with a Bernoulli distribution P(y = 1 | x) = ŷ.
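To make these hyperparameters concrete, the following sketch (shape choices and names are assumptions for illustration, not from the slides) runs a forward pass through a network of depth l = 2 with ReLU hidden activations and a linear output unit, i.e. h_k = φ(W_k h_{k−1} + b_k) followed by y = V h_l + c.

```python
import numpy as np

# Minimal sketch (assumed shapes and names, not from the slides): a forward pass
# through a depth-l feedforward network, h_k = phi(W_k h_{k-1} + b_k), followed
# by a linear output unit y = V h_l + c (the regression case described above).

def relu(u):                        # ReLU: g(x) = max{0, x}
    return np.maximum(0.0, u)

def sigmoid(u):                     # Sigmoid: (1 + e^{-x})^{-1}
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, weights, biases, V, c, phi=relu):
    h = x
    for W, b in zip(weights, biases):    # one pass per hidden layer
        h = phi(W @ h + b)
    return V @ h + c                     # linear output unit

# Depth l = 2 with hidden sizes n1 = 4 and n2 = 3 (arbitrary illustrative choices).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(3, 4))]
biases = [np.zeros(4), np.zeros(3)]
V, c = rng.normal(size=(2, 3)), np.zeros(2)
x = rng.normal(size=3)
print(forward(x, weights, biases, V, c))
```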
Parameters: training. Objective: let's define θ = (W1, b1, ..., Wl, bl). We suppose we have a set of inputs X = (x^1, ..., x^N) and a set of expected outputs Y = (y^1, ..., y^N). The goal is to find a neural network f_NN such that, for all i, f_NN(x^i; θ) ≈ y^i.

Cost function: to evaluate the error that our current network makes, let's define a cost function L(X, Y; θ). The goal becomes to find argmin_θ L(X, Y; θ). The loss function should represent a combination of the distances between every y^i and the corresponding f_NN(x^i; θ); common choices are the mean squared error (rare) and the cross-entropy.

Finding the minimum: the basic idea consists in computing θ̂ such that ∇_θ L(X, Y; θ̂) = 0. This is difficult to solve analytically, e.g. when θ has millions of degrees of freedom.

Gradient descent: let's use a numerical way to optimize θ, called gradient descent (section 4.3). The idea is that f(θ − εu) ≈ f(θ) − ε u^T ∇f(θ). So if we take u = ∇f(θ), we have u^T u > 0 and then f(θ − εu) ≈ f(θ) − ε u^T u < f(θ). If f is a function to minimize, this gives an update rule that improves our estimate.

Gradient descent algorithm: (1) have an estimate θ̂ of the parameters; (2) compute ∇_θ L(X, Y; θ̂); (3) update θ̂ ← θ̂ − ε ∇_θ L; (4) repeat steps 2-3 until ||∇_θ L|| < threshold. Problem: how can we estimate ∇_θ L(X, Y; θ̂) efficiently? With the back-propagation algorithm.

Back-propagation for parameter learning. Consider the architecture x → φ(w1 ·) → φ(w2 ·) → y, with function y = φ(w2 φ(w1 x)), N training pairs T = {(x̂_n, ŷ_n)}_{n=1}^N, and an activation function φ(·).
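As an illustration of these two ingredients, the sketch below (assumptions of this write-up: scalar weights, ReLU φ, and a mean-squared-error loss, which the slides call rare) runs gradient descent with hand-derived back-propagation on the toy architecture y = φ(w2 φ(w1 x)).

```python
import numpy as np

# Minimal sketch: gradient descent with back-propagation for the toy network
# y = phi(w2 * phi(w1 * x)), scalar weights, ReLU phi, squared-error loss.
# (Assumed setup for illustration; not the slides' own code.)

def relu(u):
    return np.maximum(0.0, u)

def relu_grad(u):
    return (u > 0).astype(float)

def loss_and_grads(w1, w2, x, y):
    # Forward pass, keeping intermediate values needed by back-propagation.
    a1 = w1 * x
    h1 = relu(a1)
    a2 = w2 * h1
    y_hat = relu(a2)
    L = np.mean((y_hat - y) ** 2)
    # Backward pass: propagate dL/dy_hat back through the two layers.
    d_yhat = 2.0 * (y_hat - y) / x.size
    d_a2 = d_yhat * relu_grad(a2)
    g_w2 = np.sum(d_a2 * h1)
    d_h1 = d_a2 * w2
    d_a1 = d_h1 * relu_grad(a1)
    g_w1 = np.sum(d_a1 * x)
    return L, g_w1, g_w2

# Gradient descent: theta <- theta - eps * grad_theta(L), repeated until convergence.
x = np.array([0.5, 1.0, 2.0])
y = 3.0 * x                      # toy training pairs (x_n, y_n)
w1, w2, eps = 0.5, 0.5, 0.05
for _ in range(200):
    L, g1, g2 = loss_and_grads(w1, w2, x, y)
    w1 -= eps * g1
    w2 -= eps * g2
print(w1 * w2, L)                # w1 * w2 approaches 3 and the loss approaches 0
```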
Recommended publications
  • Recurrent Neural Networks
Sequence Modeling: Recurrent and Recursive Nets. Lecture slides for Chapter 10 of Deep Learning, www.deeplearningbook.org, Ian Goodfellow, 2016-09-27; adapted by m.n. for CMPS 392. RNNs are a family of neural networks for processing sequential data. Recurrent networks can scale to much longer sequences than would be practical for networks without sequence-based specialization, and most recurrent networks can also process sequences of variable length. They are based on parameter sharing: if we had separate parameters for each value of the time index, we could not generalize to sequence lengths not seen during training, nor share statistical strength across different sequence lengths and across different positions in time. (Goodfellow 2016) Example: consider the two sentences “I went to Nepal in 2009” and “In 2009, I went to Nepal.” How can a machine learning model extract the year information? A traditional fully connected feedforward network would have separate parameters for each input feature, so it would need to learn all of the rules of the language separately at each position in the sentence. By comparison, a recurrent neural network shares the same weights across several time steps. (Goodfellow 2016) RNN vs. 1D convolutional: the output of convolution is a sequence where each member of the output is a function of a small number of neighboring members of the input. Recurrent networks share parameters in a different way: each member of the output is a function of the previous members of the output, and each member of the output is produced using the same update rule applied to the previous outputs.
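A minimal sketch of that parameter sharing (layer sizes and names are assumptions, not from the slides): a single set of weights (W, U, b) is reused at every time step, so the same cell handles sequences of any length.

```python
import numpy as np

# Minimal sketch of parameter sharing in an RNN (sizes and names are assumptions):
# one set of weights (W, U, b) is applied at every time step, so the same cell
# works for sequences of any length T.

def rnn_forward(x_seq, h0, W, U, b):
    """x_seq: (T, n_in) input sequence; returns hidden states of shape (T, n_hidden)."""
    h, states = h0, []
    for x_t in x_seq:                          # same W, U, b at every position t
        h = np.tanh(W @ h + U @ x_t + b)       # h_t = tanh(W h_{t-1} + U x_t + b)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
n_in, n_hidden, T = 5, 8, 10
W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
U = rng.normal(scale=0.1, size=(n_hidden, n_in))
b = np.zeros(n_hidden)
states = rnn_forward(rng.normal(size=(T, n_in)), np.zeros(n_hidden), W, U, b)
print(states.shape)    # (10, 8): one hidden state per time step, one shared cell
```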
  • Ian Goodfellow, Staff Research Scientist, Google Brain CVPR
Introduction to GANs. Ian Goodfellow, Staff Research Scientist, Google Brain. CVPR Tutorial on GANs, Salt Lake City, 2018-06-22. (The title slide shows a word cloud of GAN variants: MedGAN, ID-CGAN, CoGAN, LR-GAN, CGAN, IcGAN, b-GAN, LS-GAN, LAPGAN, DiscoGAN, InfoGAN, DCGAN, CycleGAN, BiGAN, EBGAN, BEGAN, MalGAN and many others.) Generative modeling covers density estimation (training data → density function) and sample generation (training data → sample generator; CelebA samples from Karras et al., 2017). Adversarial nets framework: D tries to make D(G(z)) near 0 and D(x) near 1, while G tries to make D(G(z)) near 1; D is a differentiable function applied to x sampled from the data or from the model, and G is a differentiable function applied to input noise z (Goodfellow et al., 2014). Self-play: 1959, Arthur Samuel's checkers agent (OpenAI, 2017; Silver et al., 2017; Bansal et al., 2017). 3.5 years of progress on faces: 2014-2017 (Brundage et al., 2018). General framework for AI & security threats. Less than 2 years of progress on ImageNet: class-conditional samples (monarch butterfly, goldfinch, daisy, redshank, grey whale) from Odena et al. 2016, Miyato et al. 2017, and Zhang et al. 2018. Figure 7: 128x128 pixel images generated by SN-GANs trained on the ILSVRC2012 dataset.
  • Defending Black Box Facial Recognition Classifiers Against Adversarial Attacks
Defending Black Box Facial Recognition Classifiers Against Adversarial Attacks. Rajkumar Theagarajan and Bir Bhanu, Center for Research in Intelligent Systems, University of California, Riverside, CA 92521. [email protected], [email protected]. Abstract: Defending adversarial attacks is a critical step towards reliable deployment of deep learning empowered solutions for biometrics verification. Current approaches for defending Black box models use the classification accuracy of the Black box as a performance metric for validating their defense. However, classification accuracy by itself is not a reliable metric to determine if the resulting image is “adversarial-free”. This is a serious problem for online biometrics verification applications where the ground-truth of the incoming image is not known and hence we cannot compute the accuracy of the classifier or know if the image is “adversarial-free” or not. This paper proposes a novel framework for defending Black box systems from adversarial attacks. In the White box setting the attacker has full knowledge about the classification model's parameters and architecture, whereas in the Black box setting [50] the attacker does not have this knowledge. In this paper we focus on the Black box based adversarial attacks. Current defenses against adversarial attacks can be classified into four approaches: 1) modifying the training data, 2) modifying the model, 3) using auxiliary tools, and 4) detecting and rejecting adversarial examples. Modifying the training data involves augmenting the training dataset with adversarial examples and re-training the classifier [22, 30, 64, 73] or performing N number of pre-selected image transformations in a random order [13, 17, 26, 52]. Modifying the model involves pruning the architecture of the classifier [38, 51, 69] or adding pre/post-processing layers to it [9, 12, 71].
  • The Creation and Detection of Deepfakes: a Survey
The Creation and Detection of Deepfakes: A Survey. YISROEL MIRSKY*, Georgia Institute of Technology and Ben-Gurion University; WENKE LEE, Georgia Institute of Technology. Generative deep learning algorithms have progressed to a point where it is difficult to tell the difference between what is real and what is fake. In 2018, it was discovered how easy it is to use this technology for unethical and malicious applications, such as the spread of misinformation, impersonation of political leaders, and the defamation of innocent individuals. Since then, these ‘deepfakes’ have advanced significantly. In this paper, we explore the creation and detection of deepfakes and provide an in-depth view of how these architectures work. The purpose of this survey is to provide the reader with a deeper understanding of (1) how deepfakes are created and detected, (2) the current trends and advancements in this domain, (3) the shortcomings of the current defense solutions, and (4) the areas which require further research and attention. CCS Concepts: • Security and privacy → Social engineering attacks; Human and societal aspects of security and privacy; • Computing methodologies → Machine learning. Additional Key Words and Phrases: Deepfake, Deep fake, reenactment, replacement, face swap, generative AI, social engineering, impersonation. ACM Reference format: Yisroel Mirsky and Wenke Lee. 2020. The Creation and Detection of Deepfakes: A Survey. ACM Comput. Surv. 1, 1, Article 1 (January 2020), 38 pages. DOI: XX.XXXX/XXXXXXX.XXXXXXX. 1 INTRODUCTION. A deepfake is content, generated by an artificial intelligence, that is authentic in the eyes of a human being. The word deepfake is a combination of the words ‘deep learning’ and ‘fake’ and primarily relates to content generated by an artificial neural network, a branch of machine learning.
  • Ian Goodfellow Deep Learning Pdf
Ian Goodfellow deep learning pdf. Ian Goodfellow. Born: 1985/1986 (age 34-35). Nationality: American. Alma mater: Stanford University, Université de Montréal. Known for: generative adversarial networks, adversarial examples. Fields: computer science. Institutions: Apple, Google Brain, OpenAI. Thesis: Deep Learning of Representations and its Application to Computer Vision (2014). Doctoral advisers: Yoshua Bengio, Aaron Courville. Website: www.iangoodfellow.com. Ian J. Goodfellow (born 1985 or 1986) is a machine learning researcher who currently works at Apple Inc. as director of machine learning in the Special Projects Group. Previously, he worked as a research scientist at Google Brain. He has made a number of contributions to the field of deep learning. Goodfellow received his bachelor's and master's degrees in computer science from Stanford University under the direction of Andrew Ng, and a PhD in machine learning from the Université de Montréal in April 2014 under the direction of Yoshua Bengio and Aaron Courville. His dissertation is titled Deep Learning of Representations and its Application to Computer Vision. After graduating, Goodfellow joined Google as part of the Google Brain research team. He then left Google to join the newly founded OpenAI institute, and returned to Google Research in March 2017. Goodfellow is best known for inventing generative adversarial networks. He is also the lead author of the Deep Learning textbook. At Google, he developed a system that allows Google Maps to automatically transcribe addresses from photos taken by Street View cars, and demonstrated the vulnerabilities of machine learning systems. In 2017, Goodfellow was named one of MIT Technology Review's 35 Innovators Under 35. In 2019, he was included in Foreign Policy's list of 100 Global Thinkers.
  • Semi-Supervised Neural Architecture Search
Semi-Supervised Neural Architecture Search. Renqian Luo (1), Xu Tan (2), Rui Wang (2), Tao Qin (2), Enhong Chen (1), Tie-Yan Liu (2). (1) University of Science and Technology of China, Hefei, China; (2) Microsoft Research Asia, Beijing, China. [email protected], [email protected], {xuta, ruiwa, taoqin, tyliu}@microsoft.com. Abstract: Neural architecture search (NAS) relies on a good controller to generate better architectures or predict the accuracy of given architectures. However, training the controller requires both abundant and high-quality pairs of architectures and their accuracy, while it is costly to evaluate an architecture and obtain its accuracy. In this paper, we propose SemiNAS, a semi-supervised NAS approach that leverages numerous unlabeled architectures (without evaluation and thus nearly no cost). Specifically, SemiNAS 1) trains an initial accuracy predictor with a small set of architecture-accuracy data pairs; 2) uses the trained accuracy predictor to predict the accuracy of a large amount of architectures (without evaluation); and 3) adds the generated data pairs to the original data to further improve the predictor. The trained accuracy predictor can be applied to various NAS algorithms by predicting the accuracy of candidate architectures for them. SemiNAS has two advantages: 1) It reduces the computational cost under the same accuracy guarantee. On the NASBench-101 benchmark dataset, it achieves comparable accuracy with the gradient-based method while using only 1/7 of the architecture-accuracy pairs. 2) It achieves higher accuracy under the same computational cost. It achieves 94.02% test accuracy on NASBench-101, outperforming all the baselines when using the same number of architectures. On ImageNet, it achieves a 23.5% top-1 error rate (under the 600M FLOPS constraint) using 4 GPU-days for search.
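The three numbered steps amount to a self-training loop for the accuracy predictor. The sketch below illustrates that loop generically; the "architecture encodings", the hidden accuracy function and the ridge-regression predictor are all stand-ins invented for illustration, not the actual SemiNAS components.

```python
import numpy as np

# Generic self-training loop for an accuracy predictor (toy stand-in, not SemiNAS):
# "architectures" are random feature vectors, their true accuracy is a hidden
# function (playing the role of costly evaluation), and the predictor is ridge
# regression fit in closed form.
rng = np.random.default_rng(0)

def true_accuracy(a):                       # stands in for actually training/evaluating
    return 0.9 - 0.1 * np.sin(a @ np.arange(1, a.shape[-1] + 1)) ** 2

def fit_ridge(X, y, lam=1e-3):              # closed-form ridge regression predictor
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

labeled = rng.normal(size=(20, 5))          # small set of evaluated architectures
labels = true_accuracy(labeled)
unlabeled = rng.normal(size=(500, 5))       # many architectures, never evaluated

w = fit_ridge(labeled, labels)              # 1) initial predictor on labeled pairs
pseudo = unlabeled @ w                      # 2) predict accuracy of unlabeled ones
X = np.vstack([labeled, unlabeled])         # 3) add generated pairs and retrain
y = np.concatenate([labels, pseudo])
w = fit_ridge(X, y)

test = rng.normal(size=(100, 5))
print("MSE on fresh architectures:", np.mean((test @ w - true_accuracy(test)) ** 2))
```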
  • Amora: Black-Box Adversarial Morphing Attack
Amora: Black-box Adversarial Morphing Attack. Run Wang (1), Felix Juefei-Xu (2), Qing Guo (1), Yihao Huang (3), Xiaofei Xie (1), Lei Ma (4), Yang Liu (1,5). (1) Nanyang Technological University, Singapore; (2) Alibaba Group, USA; (3) East China Normal University, China; (4) Kyushu University, Japan; (5) Institute of Computing Innovation, Zhejiang University, China. ABSTRACT: Nowadays, digital facial content manipulation has become ubiquitous and realistic with the success of generative adversarial networks (GANs), making face recognition (FR) systems suffer from unprecedented security concerns. In this paper, we investigate and introduce a new type of adversarial attack to evade FR systems by manipulating facial content, called adversarial morphing attack (a.k.a. Amora). In contrast to adversarial noise attack that perturbs pixel intensity values by adding human-imperceptible noise, our proposed adversarial morphing attack works at the semantic level that perturbs pixels spatially in a coherent manner. To tackle the black-box attack problem, we devise a simple yet effective joint dictionary learning pipeline to obtain a proprietary optical flow field for each attack. Our extensive evaluation on two popular FR systems demonstrates the effectiveness of our adversarial morphing attack at various levels of morphing intensity with smiling facial expression manipulations. Both open-set and closed-set experimental results indicate that a novel black-box adversarial attack based on local deformation is possible, and is vastly different from additive noise attacks. The findings of this work potentially pave a new research direction towards a more thorough understanding and

Figure 1: Three typical attacks on the FR systems. (a) presentation spoofing attacks (e.g., print attack [14], disguise attack [50], mask attack [36], and replay attack [8]), (b) adversarial noise attack [14, 18, 42], (c) proposed Amora.
  • CS 2770: Computer Vision Generative Adversarial Networks
CS 2770: Computer Vision, Generative Adversarial Networks. Prof. Adriana Kovashka, University of Pittsburgh, April 14, 2020. Plan for this lecture: • Generative models: what are they? • Technique: Generative Adversarial Networks • Applications • Conditional GANs • Cycle-consistency loss • Dealing with sparse data, progressive training. Supervised vs unsupervised learning: in unsupervised learning the data is just x, no labels! The goal is to learn some underlying hidden structure of the data. Examples: clustering (e.g. K-means), dimensionality reduction, feature learning (e.g. autoencoders), density estimation, etc. (Lecture 13, Fei-Fei Li, Justin Johnson & Serena Yeung, May 18, 2017.) Generative models: given training data ~ pdata(x), produce generated samples ~ pmodel(x); we want to learn pmodel(x) similar to pdata(x). This addresses density estimation, a core problem in unsupervised learning. Several flavors: explicit density estimation, which explicitly defines and solves for pmodel(x), and implicit density estimation, which learns a model that can sample from pmodel(x).
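A tiny sketch of that distinction, with a 1-D Gaussian standing in for pmodel(x) (a toy choice made here, not from the slides): explicit density estimation defines and evaluates pmodel(x), while implicit density estimation only needs a sampler.

```python
import numpy as np

# Toy contrast between the two flavors, with a 1-D Gaussian standing in for
# p_model(x) (an illustrative assumption, not from the slides).
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=1000)        # training data ~ p_data(x)

# Explicit density estimation: define p_model(x), fit its parameters, evaluate it.
mu, sigma = data.mean(), data.std()
def p_model(x):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
print("p_model(2.0) =", p_model(2.0))

# Implicit density estimation: we never evaluate a density, we only draw samples.
def sample_model(n):
    return mu + sigma * rng.normal(size=n)
print("samples ~ p_model:", sample_model(3))
```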
  • Practical Methodology
Practical Methodology. Lecture slides for Chapter 11 of Deep Learning, www.deeplearningbook.org, Ian Goodfellow, 2016-09-26. What drives success in ML? Arcane knowledge of dozens of obscure algorithms, mountains of data, or knowing how to apply 3-4 standard techniques? (Goodfellow 2016) Example: Street View address number transcription (Goodfellow et al., 2014). Three step process: • Use needs to define metric-based goals • Build an end-to-end system • Data-driven refinement. Identify needs: • High accuracy or low accuracy? • Surgery robot: high accuracy • Celebrity look-a-like app: low accuracy. Choose metrics: • Accuracy? (% of examples correct) • Coverage? (% of examples processed) • Precision? (% of detections that are right) • Recall? (% of objects detected) • Amount of error? (for regression problems). End-to-end system: • Get up and running ASAP • Build the simplest viable system first • What baseline to start with though? • Copy state-of-the-art from a related publication. Deep or not? • Lots of noise, little structure -> not deep • Little noise, complex structure -> deep • Good shallow baselines: use what you know; logistic regression, SVM, boosted tree are all good. What kind of deep? Choosing architecture family: • No structure -> fully connected • Spatial structure
  • Convolutional Neural Networks: Development, Application, Architecture, Optimization
Convolutional Neural Networks. Lingzhou Xue, Penn State Department of Statistics, [email protected], http://sites.psu.edu/sldm/deeplearning/. Materials: • C. Lee Giles (Penn State), http://clgiles.ist.psu.edu/IST597/index.html • Andrew Ng (Coursera, Stanford U., Google Brain, Baidu), https://www.deeplearning.ai/ • Fei-Fei Li (Stanford U., Google Cloud), http://vision.stanford.edu/teaching.html • Aaron Courville (U Montreal), Ian Goodfellow (Google Brain), and Yoshua Bengio (U Montreal), http://www.deeplearningbook.org/. Outline: 1. Convolutional Neural Networks (development, application, architecture, optimization); 2. Deep Learning Software; 3. TensorFlow Demos. Convolutional Neural Networks (CNNs/ConvNets): • Convolution: a specialized type of linear operation • CNNs: a specialized type of neural network using the convolution (instead of general matrix multiplication) in at least one of its layers • CNNs are biologically inspired to emulate the animal visual cortex. Biological motivation and connections: the dendrites in biological neurons perform complex nonlinear computations; the synapses are not just a single weight, they're a complex non-linear system; the activation function makes the decision of whether or not to pass the signal. Activation functions, Sigmoid/Logistic and Tanh: • (-) Gradients at the tails are almost zero (vanishing gradients) • (-) Outputs (y-axis) are not zero-centered for Sigmoid/Logistic • (-) Computationally expensive. Rectified Linear Unit (ReLU): • (+) Accelerates the convergence of SGD compared to sigmoid/tanh • (+) Easy to implement by simply thresholding at zero • First introduced by Hahnloser et al. (Nature, 2000) with strong biological motivations and mathematical justifications • First demonstrated in 2011 to enable better training of deeper neural networks • The most popular activation as of 2018.
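To make "a specialized type of linear operation" concrete, here is a small NumPy sketch (an example added here, not from the slides) of a 2-D 'valid' convolution written the way deep learning libraries compute it (cross-correlation, without flipping the kernel), followed by a ReLU.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D 'valid' convolution (cross-correlation, as in most DL libraries):
    every output value is the same fixed linear combination of a local patch."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):                                   # ReLU: simply threshold at zero
    return np.maximum(0.0, x)

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[-1.0, 1.0]])               # tiny horizontal-gradient filter
print(relu(conv2d_valid(image, kernel)))       # a 5x4 feature map of ones
```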
  • Generative Adversarial Nets
Generative Adversarial Nets. Ian J. Goodfellow, Jean Pouget-Abadie*, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair†, Aaron Courville, Yoshua Bengio. Département d'informatique et de recherche opérationnelle, Université de Montréal, Montréal, QC H3C 3J7. Abstract: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. 1 Introduction. The promise of deep learning is to discover rich, hierarchical models [2] that represent probability distributions over the kinds of data encountered in artificial intelligence applications, such as natural images, audio waveforms containing speech, and symbols in natural language corpora. So far, the most striking successes in deep learning have involved discriminative models, usually those that map a high-dimensional, rich sensory input to a class label [14, 22].
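A small numerical sketch of the value function behind this game (with toy, hand-picked D and G chosen here for illustration, not the paper's multilayer perceptrons): V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))], where D is trained to maximize V and G to minimize it; when G matches the data distribution, the best D can do is output 1/2 and V equals −2 log 2.

```python
import numpy as np

# Toy evaluation of the GAN value function
#   V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
# with hand-picked D and G (illustrative stand-ins, not the paper's MLPs):
# the data are 1-D Gaussian samples, G shifts and scales the input noise,
# and D is a fixed logistic score. D wants V large, G wants V small.
rng = np.random.default_rng(0)

def D(x, a, b):                        # discriminator: sigmoid of an affine score
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

def G(z, mu, sigma):                   # generator: pushes noise z to N(mu, sigma^2)
    return mu + sigma * z

def value(a, b, mu, sigma, n=100_000):
    x = rng.normal(loc=2.0, scale=1.0, size=n)    # samples from p_data
    z = rng.normal(size=n)                        # input noise
    return (np.mean(np.log(D(x, a, b)))
            + np.mean(np.log(1.0 - D(G(z, mu, sigma), a, b))))

# A generator far from the data is easy to tell apart, so V stays close to 0 ...
print(value(a=2.0, b=0.0, mu=-2.0, sigma=1.0))
# ... while a generator matching p_data forces D towards 1/2 and V towards -2 log 2.
print(value(a=0.0, b=0.0, mu=2.0, sigma=1.0), -2 * np.log(2))
```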
  • Ian Goodfellow, OpenAI Research Scientist, NIPS 2016 Workshop on Adversarial Training, Barcelona, 2016-12-9: Adversarial Training
Introduction to Generative Adversarial Networks. Ian Goodfellow, OpenAI Research Scientist. NIPS 2016 Workshop on Adversarial Training, Barcelona, 2016-12-9. Adversarial training: • A phrase whose usage is in flux; a new term that applies to both new and old ideas • My current usage: “Training a model in a worst-case scenario, with inputs chosen by an adversary” • Examples: an agent playing against a copy of itself in a board game (Samuel, 1959); robust optimization / robust control (e.g. Rustem and Howe 2002); training neural networks on adversarial examples (Szegedy et al. 2013, Goodfellow et al. 2014). Generative adversarial networks: both players are neural networks, and the worst-case input for one network is produced by another network. Generative modeling: density estimation and sample generation (training examples vs. model samples). Adversarial nets framework: D tries to make D(G(z)) near 0 and D(x) near 1, while G tries to make D(G(z)) near 1; D is a differentiable function applied to x sampled from the data or from the model, and G is a differentiable function applied to input noise z (Goodfellow et al., 2014). Maximum likelihood: θ* = argmax_θ E_{x∼p_data} log p_model(x | θ). Fully-visible belief net: p_model(x) = p_model(x_1) ∏_{i=2}^{n} p_model(x_i | x_1, ..., x_{i−1}). Change of variables: y = g(x) ⇒ p_x(x) = p_y(g(x)) |det(∂g(x)/∂x)|. Variational bound: log p(x) ≥ log p(x) − D_KL(q(z) ‖ p(z | x)) = E_{z∼q} log p(x, z) + H(q). Boltzmann machines: p(x) = (1/Z) exp(−E(x, z)), with Z = Σ_x Σ_z exp(−E(x, z)). Generator equation