Multi-Style Transfer: Generalizing Fast Style Transfer to Several Genres

Brandon Cui, Calvin Qi, Aileen Wang
Stanford University

Abstract

This paper aims to extend the technique of fast neural style transfer to multiple styles, allowing the user to transfer the contents of any input image into an aggregation of multiple styles. We first implement single-style transfer: we train our fast style transfer network, a feed-forward convolutional neural network, on the Microsoft COCO Image Dataset 2014, and we connect this transformation network to a pre-trained VGG16 network (Frossard). After training on a desired style (or combination of styles), we can input any image and have it rendered in this new visual genre. We also add improved upsampling and instance normalization to the original networks for better visual quality. Second, we extend style transfer to multiple styles by training the network to learn parameters that blend the style weights. Our work demonstrates results comparable to previously published single-style transfer, along with promising preliminary results for multi-style transfer.

1. Introduction

Earlier style transfer algorithms have the fundamental limitation of only using low-level image features of the target image to inform the style transfer [1, 2]. Only with recent advancements in deep Convolutional Neural Networks (CNNs) have we seen powerful computer vision systems that learn to extract high-level semantic information from natural images for artistic style classification. In recent years we have seen the advent of Neural Style Transfer, owing to the intriguing visual results of being able to render images in a style of choice. Many current implementations of style transfer are well documented and produce good results using CNNs, but they have drawbacks in performance and are limited to learning a style from just one image and producing a single pre-trained style network. We hope to implement style transfer in a more generalized form that is fast to run and capable of intelligently combining various styles.

1.1. Related Work

A number of research works have used optimization to generate images depending on high-level features extracted from a CNN. Images can be generated to maximize class prediction scores [17, 18] or individual features [18]. Mahendran and Vedaldi [19] invert features from a CNN by minimizing a feature reconstruction loss; similar methods had previously been used to invert local binary descriptors [20, 21] and HOG features [22]. The work of Dosovitskiy and Brox [23] trains a feed-forward neural network to invert convolutional features and approximate a solution to the optimization problem posed by [19], but their feed-forward network is trained with a per-pixel reconstruction loss. Johnson et al. [7] instead directly optimize the feature reconstruction loss of [19].

The use of neural networks for style transfer in images saw its advent in Gatys et al., 2015 [3, 4, 5, 6]. This introduced a technique for taking the contents of an image and rendering it in the 'style' of another, including visual features such as texture, color scheme, lighting, and contrast. The result was at once visually stunning and technically intriguing, so in recent years many others have worked on refining the technique to make it more accurate, efficient, and customizable.

Johnson et al., 2016 [7, 8, 9] proposed a framework that includes a new specialized 'style transfer network' working in conjunction with a general CNN for image classification, which allows for the simultaneous understanding of style and content in images so that they can be analyzed and transferred. This method is well documented and produces very good results, but it still has drawbacks in performance and in being limited to learning a style from just one image and producing a single pre-trained style network.

Our goal in this project is first to understand the existing implementations of style transfer and the advantages and disadvantages of their many variations; then to devise a method extending one of these implementations so that the algorithm can have a more holistic understanding of 'style' that incorporates multiple images from a certain genre or artist rather than just one; and finally to implement our method fully, optimizing performance and accuracy along the way.

2. Problem Definition

The goal of our project is to:

• Implement the most primitive form of style transfer, based on iteratively updating an image and altering it to fit a desired balance of style and content. This can be framed as an optimization problem with a loss function as our objective, minimized through backpropagation onto the image itself.

• Improve upon the naive approach by implementing and training a feed-forward style transfer network that learns a particular style and can convert any image to that style with a single forward pass (Johnson et al.).

• Generalize this network to aggregate multiple styles and find the best combination of them without any manual specification from the user.

• Compare the results of these different approaches by analyzing both the visual qualities of the resulting images and numerical loss values.

3. Data Processing

We use several datasets and pre-trained networks to test and validate our multi-style transfer algorithm:

• SqueezeNet for the naive style transfer baseline
• VGG-16 and its associated pre-trained ImageNet weights for the loss network
• the Microsoft COCO Image Dataset 2014 (80,000 images) for full training of our transfer network

4. Approaches

4.1. Baseline (Gatys et al. 2015)

The baseline implementation iteratively optimizes an output image (which can start from blank pixels or random noise) and over time reaches a picture capturing the contents of one image in the style of another. It seeks to optimize a loss value that is a weighted sum of various perceptual loss functions that allow us to mathematically compare the visual qualities of images. The details of these loss functions are described in a later section.

While this method sufficiently accomplishes the basic task of transferring styles, it has various shortcomings. Primarily, it requires iteratively perturbing every input image through backpropagation, which is very slow. It also does not lead to an understanding of what exactly takes place in this transformation, and merely runs a separate optimization problem each time. It would be beneficial for our algorithm to consolidate style transfer into a series of operations that can be applied to an image instantly.
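To make the baseline concrete, the following is a minimal sketch of the iterative optimization loop in TensorFlow. The helpers `extract_features`, `content_loss`, and `style_loss` are hypothetical stand-ins for the loss-network machinery, and the default weights are illustrative; this is a sketch of the technique, not our exact implementation.

```python
import tensorflow as tf

def naive_style_transfer(content_img, style_img, extract_features,
                         content_loss, style_loss,
                         lambda_c=1.5, lambda_s=5e-4, num_steps=500):
    # Start from the content image (random noise also works).
    img = tf.Variable(content_img)
    opt = tf.keras.optimizers.Adam(learning_rate=0.02)

    content_targets = extract_features(content_img)
    style_targets = extract_features(style_img)

    for _ in range(num_steps):
        with tf.GradientTape() as tape:
            feats = extract_features(img)
            loss = (lambda_c * content_loss(feats, content_targets)
                    + lambda_s * style_loss(feats, style_targets))
        grad = tape.gradient(loss, img)
        # Backpropagate onto the image itself, not any network weights.
        opt.apply_gradients([(grad, img)])
        # Assumes pixel values in [0, 255].
        img.assign(tf.clip_by_value(img, 0.0, 255.0))
    return img
```

Because every new input repeats this whole loop, stylization takes many forward and backward passes per image, which is the performance bottleneck the fast architecture below removes.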
4.2. Fast Style Transfer Architecture (Johnson et al. 2016)

We design a feed-forward CNN that takes in an image and outputs one of the same size after a series of intermediate layers, with the output being the result of converting the original image to a chosen style. The network begins with padding and several convolution layers that apply filters to spatial regions of the input image, each grouped with batch normalization and a ReLU nonlinearity. These are followed by the same arrangement of layers organized as residual blocks, since we estimate that parts of the original image only need to be perturbed slightly from their original pixels. Then, upsampling is needed to restore the matrices to proper image dimensions. (We standardized the dimensions to 256×256, but this can be customized.) Our initial implementation follows Johnson's method of using fractional (transpose) convolution layers with stride 1/2 for upsampling, which gets the job done but leads to some minor undesirable visual artifacts that will be addressed and improved later.

We connect this transformation network to feed directly into a pre-trained VGG16 network (Frossard), which we use as a feature extractor that has already proven its effectiveness.
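The Keras sketch below illustrates the overall shape of the transform network described above (downsampling convolutions, residual blocks, fractionally strided upsampling). The exact filter counts, kernel sizes, and number of residual blocks are assumptions drawn from common fast-style-transfer implementations, not values reported in this paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_bn_relu(x, filters, kernel, strides):
    x = layers.Conv2D(filters, kernel, strides=strides, padding='same')(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def residual_block(x, filters=128):
    # Residual design: each block only needs to perturb its input slightly.
    shortcut = x
    x = conv_bn_relu(x, filters, 3, 1)
    x = layers.Conv2D(filters, 3, padding='same')(x)
    x = layers.BatchNormalization()(x)
    return layers.add([shortcut, x])

def build_transform_net(size=256):
    inp = layers.Input((size, size, 3))
    x = conv_bn_relu(inp, 32, 9, 1)
    x = conv_bn_relu(x, 64, 3, 2)    # downsample
    x = conv_bn_relu(x, 128, 3, 2)   # downsample
    for _ in range(5):
        x = residual_block(x)
    # 'Stride 1/2' fractional convolutions == transpose convs with stride 2.
    x = layers.Conv2DTranspose(64, 3, strides=2, padding='same',
                               activation='relu')(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding='same',
                               activation='relu')(x)
    # tanh output in [-1, 1]; rescaling to pixel range is left to the caller.
    out = layers.Conv2D(3, 9, padding='same', activation='tanh')(x)
    return tf.keras.Model(inp, out)
```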
Using VGG16 as the loss network presents us with many choices regarding which layer(s) of the network to select to represent image features and styles. In addition, since the total style loss is a weighted sum of the style losses at different layers, we need to decide how much to weight each.

After much experimentation, we chose the relu2_2 layer for features, since it yielded reconstructions that best contained both the broad and the specific visual contents of the original image. The style layers were taken to be [relu1_2, relu2_2, relu3_3, relu4_3] with weights [4, 1, 0.1, 0.1] respectively, to capture a variety of high- and low-level image qualities.

Our total loss function is defined by

L = λ_c·L_c + λ_s·L_s + λ_tv·L_tv

where the terms represent content, style, and total variation loss respectively, each with a scaling weight as a hyperparameter. These individual loss functions are described in much more detail below; essentially, content corresponds to the actual subject matter of the image, style represents the way it looks, and total variation measures similarity between neighboring pixels as a method for reducing noise. After extensive hyperparameter searching and tuning, we found that in our implementation the best values are typically around

λ_c = 1.5, λ_s = 5·10⁻⁴, λ_tv = 3·10⁻⁹.

We implemented the network in TensorFlow and trained it on 80,000 images from the Microsoft COCO 2014 dataset. Using minibatches of size 4 for two total epochs, training time is around 6 hours. The resulting style transfer network can stylize images in less than a second, which is much faster than naive style transfer (see Figure 1 for the fast style transfer architecture). However, it has the limitation of only being able to handle one chosen style, fixed from the start.

Figure 1: Neural network architecture for style transfer. a) Image Transform Net, b) Residual Connections, c) Loss Network (Johnson et al.)
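To make the loss terms above concrete, here is a minimal sketch using the layer choices and weights reported in this section. `vgg_features` is an assumed helper that returns a dict of VGG16 activations keyed by layer name; the Gram-matrix normalization is a common convention and an assumption on our part.

```python
import tensorflow as tf

STYLE_LAYERS = ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3']
STYLE_WEIGHTS = [4.0, 1.0, 0.1, 0.1]
CONTENT_LAYER = 'relu2_2'
LAMBDA_C, LAMBDA_S, LAMBDA_TV = 1.5, 5e-4, 3e-9

def gram_matrix(feats):
    # feats: (batch, H, W, C) -> normalized (batch, C, C) feature correlations.
    _, h, w, c = feats.shape
    f = tf.reshape(feats, (-1, h * w, c))
    return tf.matmul(f, f, transpose_a=True) / float(h * w * c)

def total_loss(vgg_features, output, content, style_grams):
    out_feats = vgg_features(output)
    content_feats = vgg_features(content)

    # Content loss: feature reconstruction at relu2_2.
    l_content = tf.reduce_mean(
        tf.square(out_feats[CONTENT_LAYER] - content_feats[CONTENT_LAYER]))

    # Style loss: weighted sum of Gram-matrix distances across layers.
    l_style = 0.0
    for layer, w in zip(STYLE_LAYERS, STYLE_WEIGHTS):
        l_style += w * tf.reduce_mean(
            tf.square(gram_matrix(out_feats[layer]) - style_grams[layer]))

    # Total variation loss: penalizes differences between neighboring pixels.
    l_tv = tf.reduce_mean(tf.image.total_variation(output))

    return LAMBDA_C * l_content + LAMBDA_S * l_style + LAMBDA_TV * l_tv
```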

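Finally, a sketch of the training configuration described above (minibatches of 4 COCO images for two epochs). It reuses `gram_matrix` and `total_loss` from the previous sketch; `coco_dataset` and the optimizer settings are placeholders for the reader's own pipeline, not our exact setup.

```python
import tensorflow as tf

def train(transform_net, vgg_features, coco_dataset, style_img,
          epochs=2, batch_size=4):
    opt = tf.keras.optimizers.Adam(1e-3)
    # Style targets are fixed for the whole run: one chosen style.
    style_grams = {layer: gram_matrix(f)
                   for layer, f in vgg_features(style_img).items()}
    ds = coco_dataset.batch(batch_size)
    for _ in range(epochs):
        for batch in ds:
            with tf.GradientTape() as tape:
                styled = transform_net(batch)
                loss = total_loss(vgg_features, styled, batch, style_grams)
            grads = tape.gradient(loss, transform_net.trainable_variables)
            opt.apply_gradients(zip(grads, transform_net.trainable_variables))
```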