Visual Attribute Transfer through Deep Image Analogy∗
Jing Liao1, Yuan Yao2,†, Lu Yuan1, Gang Hua1, and Sing Bing Kang1
1Microsoft Research, 2Shanghai Jiao Tong University

∗Supplemental material: https://liaojing.github.io/html/data/analogy_supplemental.pdf; source code: https://github.com/msracver/Deep-Image-Analogy.
†This work was done when Yuan Yao was an intern at MSR Asia.

[Figure 1 panels: A (input), A′ (output), B (output), B′ (input)]
Figure 1: Our technique allows us to establish semantically-meaningful dense correspondences between two input images A and B′. A′ and B are the reconstructed results subsequent to transfer of visual attributes.

Abstract

We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene.

Our technique finds semantically-meaningful dense correspondences between two input images. To accomplish this, it adapts the notion of “image analogy” [Hertzmann et al. 2001] with features extracted from a Deep Convolutional Neural Network for matching; we call our technique deep image analogy. A coarse-to-fine strategy is used to compute the nearest-neighbor field for generating the results. We validate the effectiveness of our proposed method in a variety of cases, including style/texture transfer, color/style swap, sketch/painting to photo, and time lapse.

1 Introduction

Many types of compelling image stylization effects have been demonstrated over the years, including color transfer, texture transfer, and style transfer. Their appeal is especially strong in the context of social media, where photo sharing and entertainment are important elements. A number of popular apps such as Prisma and Facetune have successfully capitalized on this appeal.

The applications of color transfer, texture transfer, and style transfer share a common theme, which we characterize as visual attribute transfer. In other words, visual attribute transfer refers to copying visual attributes of one image, such as color, texture, and style, to another image.

In this paper, we describe a new technique for visual attribute transfer for a pair of images that may be very different in appearance but semantically similar. In the context of images, by “semantic” we refer to high-level visual content involving identifiable objects. We deem two different images to be semantically similar if both are of the same type of scene containing objects with the same names or classes, e.g., nature scenes featuring pandas and foliage, headshots, or indoor scenes featuring dining rooms with tables, chairs, walls, and ceiling. Figure 1 shows two semantically similar pairs. The images in each pair of inputs (A and B′) look dramatically different, but have objects with similar identities.

Our technique establishes semantically-meaningful dense correspondences between the input images, which allow effective visual attribute transfer. Low-level matching methods, such as optical flow [Brox et al. 2004] and PatchMatch [Barnes et al. 2009], are designed to match local intensities. Hence, they fail to match under large visual variations. While other methods such as SIFT flow [Liu et al. 2011] or deep match [Weinzaepfel et al. 2013] are more reliable in matching sparse features, they are also not able to handle extreme visual variations, such as matching across an artistic painting and a real photograph. This is because these methods are still fundamentally based on low-level features.

Methods have been proposed to match particular domains, such as drawings/paintings to photographs [Russell et al. 2011], sketches to photos [Chen et al. 2009], and photos under different illuminants [Chong et al. 2008]. However, these methods typically are very domain-specific and do not easily generalize. Shechtman and Irani [2007] propose a more general solution using local self-similarity descriptors that are invariant across visual domains, and Shrivastava et al. [2011] consider relative weighting between the descriptors for cross-domain image matching. Unfortunately, these methods do not produce dense correspondences between images of different domains.

We handle the dense correspondence problem using ideas related to image analogies [Hertzmann et al. 2001], which involve dense mappings between images from different domains. An image analogy is codified as A : A′ :: B : B′, where B′ relates to B in the same way as A′ relates to A, and additionally, A and A′ (also B and B′) are in pixel-wise correspondence. In forming an image analogy, typically A, A′, and either B or B′ are given, and the goal is to solve for the sole remaining image. In contrast, in our scenario only a source image A and an example image B′ are given, and both A′ and B represent latent images to be estimated, imposing a bidirectional constraint to better match A to B′.
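Schematically, the bidirectional constraint can be written as a nearest-neighbor objective over paired patches. The following is a simplified sketch with notation we introduce here for illustration (the objective actually optimized later in the paper is defined over deep feature patches, introduced below, and includes per-patch normalization): for the forward mapping φ_{a→b},

    φ_{a→b}(p) = argmin_q ( ||P_A(p) − P_B(q)||² + ||P_{A′}(p) − P_{B′}(q)||² ),

where P_X(p) denotes a small patch of image X centered at position p. The first term compares patches that both carry A's visual attributes, and the second compares patches that both carry B′'s attributes, so each distance is measured within a single visual domain; the reverse mapping φ_{b→a} is defined symmetrically.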
Solving the visual attribute transfer problem is equivalent to finding both unknown images A′ and B. Instead of applying image analogy to image pixels directly, we use a Deep Convolutional Neural Network (CNN) [Krizhevsky et al. 2012] to construct a feature space in which to form image analogies. It has been shown that such deep features are better representations for semantic structures [Zeiler and Fergus 2014]. We call our new technique deep image analogy.

Our approach leverages pre-trained CNNs for object recognition (e.g., VGG-19 [Simonyan and Zisserman 2014]) to construct such a feature space. A nice property of CNN representations is that they gradually encode image information from low-level details to high-level semantic content. This provides a good decomposition of semantic structure and visual attributes for transfer. In addition, the spatial correspondence between intermediate feature layers in CNN architectures is approximately maintained. Both properties facilitate a coarse-to-fine strategy for computing the nearest-neighbor field (NNF), a core component used in reconstructing images (see the sketch after the contribution list below). To speed up the required nearest-neighbor search, we adapt PatchMatch [Barnes et al. 2009] to the CNN feature domain. Our method uses the bidirectional constraint in the patch similarity metric, which has previously been shown to work well on re-targeting problems [Simakov et al. 2008]. In the present context, the use of the bidirectional constraint introduces a useful symmetry and helps to mitigate the risk of mismatches.

Our major technical contributions are:

1. We present a new method, “deep image analogy”, which we show to be effective for visual attribute transfer across images in very different domains.

2. We extend PatchMatch and reconstruction from the image domain to the feature domain, which serves to guide semantically-meaningful visual attribute transfer.
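As a concrete illustration of these two ingredients, the sketch below (a minimal PyTorch version, not the released implementation linked in the footnote) extracts a VGG-19 feature pyramid at relu1_1 through relu5_1 and performs one brute-force bidirectional NNF search at a single layer. The layer indices, patch size, and the exhaustive search standing in for PatchMatch are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Indices of relu1_1 ... relu5_1 inside torchvision's VGG-19 `features` module.
RELU_IDS = (1, 6, 11, 20, 29)

def vgg19_feature_pyramid(img):
    """img: (1, 3, H, W) ImageNet-normalized tensor -> feature maps at relu1_1..relu5_1."""
    vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
    feats, x = [], img
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in RELU_IDS:
                feats.append(x)
    return feats  # the coarse-to-fine pass starts from the deepest map, feats[-1]

def bidirectional_nnf(f_a, f_ap, f_b, f_bp, patch=3):
    """Brute-force forward NNF at one layer under the bidirectional patch metric.

    f_a, f_ap, f_b, f_bp: (1, C, H, W) feature maps of A, A', B, B'.
    Returns, for every spatial position of A, the flat index of its match in B.
    """
    def patch_matrix(f):
        # One row per spatial position; L2-normalize each flattened patch.
        p = F.unfold(f, patch, padding=patch // 2)   # (1, C*patch*patch, H*W)
        return F.normalize(p.squeeze(0).t(), dim=1)  # (H*W, C*patch*patch)

    pa, pap, pb, pbp = map(patch_matrix, (f_a, f_ap, f_b, f_bp))
    # First term compares A with B (both carry A's attributes); second term
    # compares A' with B' (both carry the attributes of the example B').
    cost = torch.cdist(pa, pb) + torch.cdist(pap, pbp)
    return cost.argmin(dim=1)
```

In the full coarse-to-fine pass, the NNF computed at a coarse layer is upsampled to initialize the search at the next finer layer, and randomized PatchMatch-style propagation replaces the exhaustive search, which is only tractable at coarse resolutions.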
2 Related Work

In this section, we review techniques that are related to visual attribute transfer (color, texture, and style transfer, as well as image analogy). We also briefly discuss two other topics very relevant to our work: dense image correspondence and neural style transfer.

2.1 Visual Attribute Transfer

Much work has been done on the transfer of various visual attributes (e.g., color, tone, texture, style) from one image to another, and we discuss only representative papers for brevity. Most of these approaches are, however, not general, in that they are designed to transfer a specific type of visual attribute. As we show later, our technique is designed to handle more types of visual attributes.

Color Transfer. Early color transfer techniques tend to be global, i.e., a global transformation is applied to a source image to match the color statistics of a reference image [Reinhard et al. 2001; Pitié et al. 2005] (a minimal sketch of this statistic matching appears at the end of this section). They work well when the source and reference images are of similar scenes, even though the spatial layouts can be dissimilar. Other methods incorporate user input [An and Pellacini 2010; Welsh et al. 2002] or a large database [Dale et al. 2009; Laffont et al. 2014] to guide the color transfer. Local color transfer methods infer local color statistics in different color regions by establishing region correspondences [Tai et al. 2005]. More recently, Claudio et al. [2012] transfer local color between regions with the same annotated class; in a similar vein, Wu et al. [2013] transfer the color style across the same semantic regions.

Texture Transfer. Most early texture transfer algorithms rely on non-parametric sampling for texture synthesis while using different ways to preserve the structure of the target image. For instance, Efros and Freeman [2001] introduce a correspondence map that includes features of the target image, such as image intensity, to constrain the texture synthesis procedure. Ashikhmin [2003] focuses on transferring high-frequency texture information while preserving the coarse scale of the target image. Lee et al. [2010] improve this algorithm by additionally augmenting texture transfer with edge orientation information.

Style Transfer. Style transfer is used as a means to migrate an artistic style from an example image to a source image. The decomposition of content and style in artistic images is bound to the coupling between the source content and the example style. Zhang et al. [2013] decompose the input images into three components: draft, paint, and edge; the style is transferred from the template image to the source image through the paint and edge components. Frigo et al. [2016] view style transfer as the composition of local texture transfer and global color transfer, and suggest a simple yet efficient adaptive image partition for the decomposition of style and content. Shih et al. [2014] robustly transfer the local statistics of an example portrait to a target headshot.
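As referenced in the color transfer discussion above, the following is a minimal sketch of the global statistic-matching idea attributed to Reinhard et al. [2001]: shift and scale each channel of the source so that its mean and standard deviation match those of the reference. Reinhard et al. operate in a decorrelated lαβ color space; for brevity, this sketch matches statistics in whatever 3-channel space the arrays are supplied in, which is an assumption rather than the cited pipeline.

```python
import numpy as np

def global_color_transfer(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """source, reference: float images of shape (H, W, 3). Returns recolored source."""
    out = np.empty_like(source)
    for c in range(3):  # match mean and standard deviation per channel
        s_mu, s_std = source[..., c].mean(), source[..., c].std() + 1e-8
        r_mu, r_std = reference[..., c].mean(), reference[..., c].std()
        out[..., c] = (source[..., c] - s_mu) / s_std * r_std + r_mu
    return out
```

Converting both images to a decorrelated or perceptual space (e.g., Lab) before matching, as in the original work, typically improves results when the scenes differ substantially.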