Unsupervised Learning

Deep Learning for Graphics: Unsupervised Learning
Niloy Mitra (UCL), Iasonas Kokkinos (UCL/Facebook), Paul Guerrero (UCL), Vladimir Kim (Adobe Research), Kostas Rematas (U Washington), Tobias Ritschel (UCL)
EG Course "Deep Learning for Graphics"

Timetable: the course sessions (Introduction, Theory, NN Basics, Supervised Applications, Data, Unsupervised Applications, Beyond 2D, Outlook) are divided among the six presenters; this part covers Unsupervised Applications.

Unsupervised Learning
• There is no direct ground truth for the quantity of interest
• This part covers autoencoders, variational autoencoders (VAEs), and generative adversarial networks (GANs)

Autoencoders
Goal: meaningful features that capture the main factors of variation in the dataset.
• Such features are good for classification, clustering, exploration, generation, …
• We have no ground truth for them
• Architecture: input data → encoder → features
(Slide credit: Fei-Fei Li, Justin Johnson, Serena Yeung, CS 231n)

Autoencoders: Reconstruction
Goal: features (latent variables) that can be used to reconstruct the input.
• Architecture: input data x → encoder → features z → decoder → reconstruction x̂
• L2 loss function: L = ‖x − x̂‖²

Autoencoders vs. PCA
• Linear transformations for the encoder and decoder give a result close to PCA
• Deeper networks give better reconstructions, since the basis can be non-linear
(Image credit: Reducing the Dimensionality of Data with Neural Networks, Hinton and Salakhutdinov)
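The encoder/decoder pipeline with an L2 reconstruction loss can be sketched in a few lines. The following is a minimal numpy illustration, not the course notebook: a linear encoder and decoder with a 1D code, trained by gradient descent on made-up synthetic 2D data, so the optimum is close to PCA as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D data that mostly varies along one direction (the main factor of variation).
x = rng.normal(size=(200, 1)) @ np.array([[2.0, 1.0]]) + 0.1 * rng.normal(size=(200, 2))
x -= x.mean(axis=0)

# Linear encoder and decoder with a 1D latent code.
W_enc = rng.normal(scale=0.1, size=(2, 1))
W_dec = rng.normal(scale=0.1, size=(1, 2))

lr = 0.05
for step in range(1000):
    z = x @ W_enc                  # encode: features (latent variables)
    x_hat = z @ W_dec              # decode: reconstruction
    err = x_hat - x                # drives the L2 loss ||x - x_hat||^2
    W_dec -= lr * (z.T @ err) / len(x)            # gradient of mean squared error
    W_enc -= lr * (x.T @ (err @ W_dec.T)) / len(x)
    # note: there is no ground truth for z itself; the features are learned indirectly

loss = np.mean(np.sum((x @ W_enc @ W_dec - x) ** 2, axis=1))
```

After training, the 1D code captures the dominant direction of variation; a deeper, non-linear encoder/decoder would improve on this only when the data does not lie near a linear subspace.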
Example: Document Word Probabilities Mapped to a 2D Code
• Each document's vector of word probabilities is mapped to a 2D code: the autoencoder's codes separate topics more cleanly than those of LSA (based on PCA)
(Image credit: Reducing the Dimensionality of Data with Neural Networks, Hinton and Salakhutdinov)

Example: Semi-Supervised Classification
• Many images, but few ground-truth labels
• Start unsupervised: train an autoencoder (encoder, decoder, L2 reconstruction loss) on the many unlabeled images
• Supervised fine-tuning: reuse the trained encoder, attach a classifier to its features (latent variables), and train on the labeled images with a classification loss (softmax, etc.) between predicted and ground-truth labels
(Slide credit: Fei-Fei Li, Justin Johnson, Serena Yeung, CS 231n)

Code example: Autoencoder (autoencoder.ipynb)

Generative Models
• Assumption: the dataset consists of samples from an unknown distribution p_data
• Goal: create a new sample from p_data that is not in the dataset
(Image credit: Progressive Growing of GANs for Improved Quality, Stability, and Variation, Karras et al.)

• A generator with parameters θ maps latent variables z to samples x = g_θ(z), where z is drawn from a distribution p(z) that is known and easy to sample from; this induces a model distribution p_g
• How to measure the similarity of p_g and p_data?
  1) Likelihood of the data in p_g → Variational Autoencoders (VAEs)
  2) Adversarial game: a discriminator tries to distinguish p_g from p_data while the generator makes them hard to distinguish → Generative Adversarial Networks (GANs)

Autoencoders as Generative Models?
• A trained decoder transforms features z into approximate samples from p_data; decoder = generator?
• What happens if we pick a random z?
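Picking a random z and decoding it is exactly ancestral sampling: draw z from a simple, known prior and push it through the generator. A minimal sketch, with an arbitrary made-up deterministic map standing in for a trained decoder:

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(z):
    """Stand-in for a trained decoder g_theta; the exact map is arbitrary here."""
    return 3.0 * np.tanh(z) + 0.5 * z

# p(z) = N(0, 1): known and easy to sample from.
z = rng.normal(size=10_000)
x = generator(z)  # samples from the implicit model distribution p_g
```

Note that p_g is easy to sample from but has no tractable density: evaluating it would require inverting the generator. VAEs and GANs address this in two different ways.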
• We do not know the distribution of random points in feature space (latent space) that decode to likely samples

Variational Autoencoders (VAEs)
• Pick a parametric distribution p(z) for the features
• The generator maps a sampled z to an image x with distribution p_θ(x|z), where θ are the generator's parameters
• Train the generator to maximize the likelihood of the data in the model: maximize Σ_i log p_θ(x_i), where p_θ(x) = ∫ p_θ(x|z) p(z) dz

Outputting a Distribution
• The generator outputs distribution parameters rather than a single value: for example the mean and variance of a normal distribution, or the probabilities of a Bernoulli distribution, from which x is sampled

Variational Autoencoders (VAEs): Naïve Sampling (Monte Carlo)
• SGD approximates the expected values over samples
• In each training iteration, sample z from p(z) and x randomly from the dataset, and maximize log p_θ(x|z)
• Problem: few (z, x) pairs have a non-zero loss gradient, since a random z rarely decodes to anything close to the randomly chosen x

Variational Autoencoders (VAEs): The Encoder
• During training, another network, the encoder with parameters φ, can guess a good z for a given x, i.e. sample z from q_φ(z|x)
• q_φ(z|x) should be much narrower than p(z), so that its samples are likely to explain x
• The encoder also gives us features for each data point
• Can we still easily sample a new z just from p(z)?
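The "few pairs have non-zero gradients" problem can be seen numerically. Assuming a made-up 1D model p_θ(x|z) = N(x; 2z, 0.1²) and one fixed data point x, almost all z drawn from the prior give a vanishing likelihood, and hence a vanishing gradient:

```python
import numpy as np

rng = np.random.default_rng(2)

sigma = 0.1
x = 3.0                       # one data point from the dataset
z = rng.normal(size=100_000)  # naive Monte-Carlo: z ~ p(z) = N(0, 1)

# log p(x | z) for the toy generator mean g(z) = 2 * z.
log_w = -0.5 * ((x - 2.0 * z) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

# Fraction of sampled z whose likelihood is within a factor 1000 of the best one.
w = np.exp(log_w - log_w.max())
frac_effective = float((w > 1e-3).mean())
```

Only a few percent of prior samples land where this toy generator can explain x; an encoder q_φ(z|x) proposes z near that region instead of drawing blindly from p(z).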
• Need to make sure that q_φ(z|x), averaged over the data, approximates p(z)
• Regularize with the KL-divergence KL(q_φ(z|x) ‖ p(z))
• The negative of this loss can be shown to be a lower bound on the data likelihood, and is equal to it if q_φ(z|x) = p_θ(z|x)

Reparameterization Trick
• Example when q_φ(z|x) is Gaussian: z = μ_φ(x) + σ_φ(x) ⊙ ε, where ε ~ N(0, I)
• Backpropagation cannot pass through a sampling step directly, but after this rewrite the sample is a deterministic function of the encoder outputs, so gradients flow through μ_φ and σ_φ; ε itself does not depend on the parameters

Generating Data
• Samples generated by drawing z ~ p(z) and decoding, for VAEs trained on MNIST and on the Frey Faces dataset
(Image credit: Auto-Encoding Variational Bayes, Kingma and Welling)

Demos
• VAE on MNIST: http://dpkingma.com/sgvb_mnist_demo/demo.html
• VAE on Faces: http://vdumoulin.github.io/morphing_faces/online_demo.html

Code example: Variational Autoencoder (variational_autoencoder.ipynb)

Generative Adversarial Networks
• Player 1, the generator, scores if the discriminator cannot distinguish its output from real images
• Player 2, the discriminator, receives either a generated image or a real image from the dataset, and scores if it can tell real from fake

Naïve Sampling Revisited
• Few (z, x) pairs have non-zero gradients; this is a problem of the maximum-likelihood loss itself
• Use a different loss: train a discriminator network to measure the similarity

Why Adversarial?
• Suppose the generator were trained against a fixed similarity measure that approximates p_data: the lowest loss is then at the maximum of p_data
• The optimal generator has a single mode at that maximum, with small variance
(Image credit: How (not) to Train your Generative Model: Scheduled Sampling, Likelihood, Adversary?, Ferenc Huszár)
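Two pieces of the VAE machinery above can be checked numerically: the reparameterized sample z = μ + σ·ε, and the closed-form KL(N(μ, σ²) ‖ N(0, 1)) regularizer. A sketch for a single latent dimension, with made-up encoder outputs μ and σ:

```python
import numpy as np

rng = np.random.default_rng(3)

mu, sigma = 1.5, 0.4           # pretend encoder outputs for one input
eps = rng.normal(size=50_000)  # epsilon ~ N(0, 1): does not depend on parameters

# Reparameterization trick: the sample is a deterministic, differentiable
# function of (mu, sigma), so gradients can flow through it during backprop.
z = mu + sigma * eps

# Closed-form KL(q || p) for q = N(mu, sigma^2), p = N(0, 1) (one latent dim).
kl_closed = 0.5 * (mu**2 + sigma**2 - np.log(sigma**2) - 1.0)

# Monte-Carlo estimate of the same KL: E_q[log q(z) - log p(z)].
kl_mc = np.mean(0.5 * z**2 - 0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma))
```

The Monte-Carlo estimate agrees with the closed form, which is why the KL term can be added to the loss analytically while the reconstruction term is still estimated from samples.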
• For GANs, the discriminator instead approximates the ratio p_data(x) / (p_data(x) + p_g(x)), which depends on the generator: the target moves as the generator improves

• VAEs: maximize the likelihood of data samples under the model distribution p_g
• GANs: the adversarial game approximately maximizes the likelihood of generator samples under the data distribution p_data

GAN Objective
• The discriminator D(x) outputs the probability that x is not fake
• Both networks use the fake/real binary cross-entropy (BCE) classification loss
• Discriminator objective: max_D E_{x~p_data}[log D(x)] + E_{z~p(z)}[log(1 − D(G(z)))]
• Generator objective: min_G E_{z~p(z)}[log(1 − D(G(z)))]

Non-saturating Heuristic
• The generator loss above is a negative binary cross-entropy, which is nearly flat where the discriminator confidently rejects the generator's samples: poor convergence
• Flip the target class instead of flipping the sign of the loss, i.e. minimize −E_{z~p(z)}[log D(G(z))]: this behaves like a standard BCE and converges well
(Image credit: NIPS 2016 Tutorial: Generative Adversarial Networks, Ian Goodfellow)

GAN Training
• In each training step, interleave discriminator training (classify images from the dataset against generator output) and generator training (fool the current discriminator)

DCGAN
• First paper to successfully use CNNs with GANs
• Largely due to using components that were novel at the time, such as batch normalization and ReLUs
(Image credit: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, Radford et al.)
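The non-saturating heuristic above can be verified with two lines of arithmetic: when the discriminator confidently rejects a generated sample (D(G(z)) close to 0), the original loss log(1 − D) has an almost flat slope in D, while the flipped-target loss −log D has a very steep one:

```python
d = 1e-3  # D(G(z)): discriminator is nearly certain the sample is fake

# Saturating generator objective: minimize log(1 - D(G(z))).
# Slope w.r.t. D is -1 / (1 - D): about -1, no matter how badly G is doing.
grad_saturating = -1.0 / (1.0 - d)

# Non-saturating heuristic: minimize -log D(G(z)) (flip the target class).
# Slope w.r.t. D is -1 / D: large exactly where the generator needs to learn.
grad_non_saturating = -1.0 / d

ratio = abs(grad_non_saturating) / abs(grad_saturating)
```

Both objectives have the same fixed point, but the flipped-target version gives the generator a strong learning signal early in training, when the discriminator wins easily.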
InfoGAN
• Gives the generator an additional latent code and maximizes the mutual information between that code and the generated output, so that varying the code produces interpretable variation in the samples
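Putting the pieces together, the interleaved GAN training described above fits in a short script. Everything here is a made-up toy, not DCGAN: real data is 1D samples from N(4, 1), the generator is x = a·z + b, the discriminator is a logistic classifier, and the generator uses the non-saturating update:

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0   # generator parameters: x_fake = a * z + b (starts far from the data)
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w * x + c)
lr, batch = 0.05, 64

for step in range(2000):
    x_real = rng.normal(4.0, 1.0, size=batch)   # samples from p_data
    z = rng.normal(size=batch)                  # samples from p(z)
    x_fake = a * z + b

    # Discriminator step: maximize log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step (non-saturating): minimize -log D(fake).
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

fake_mean = (a * rng.normal(size=10_000) + b).mean()
```

After training, the generated samples cluster near the data mean: neither player computes a likelihood, yet the adversarial game pulls p_g toward p_data.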
