Intuitive, Interactive Beard and Hair Synthesis with Generative Models

Kyle Olszewski (1,3), Duygu Ceylan (2), Jun Xing (3,5), Jose Echevarria (2), Zhili Chen (2,7), Weikai Chen (3,6), and Hao Li (1,3,4)
(1) University of Southern California, (2) Adobe Inc., (3) USC ICT, (4) Pinscreen Inc., (5) miHoYo, (6) Tencent America, (7) ByteDance Research
{olszewski.kyle,duygu.ceylan,junxnui}@gmail.com [email protected] {iamchenzhili,chenwk891}@gmail.com [email protected]

Figure 1: Given a target subject image, a masked region in which to perform synthesis, and a set of strokes of varying colors provided by the user, our approach interactively synthesizes hair with the appropriate structure and appearance. (Panels: input photograph; mask and strokes; facial hair synthesis.)

Abstract

We present an interactive approach to synthesizing realistic variations in facial hair in images, ranging from subtle edits to existing hair to the addition of complex and challenging hair in images of clean-shaven subjects. To circumvent the tedious and computationally expensive tasks of modeling, rendering, and compositing the 3D geometry of the target hairstyle using the traditional graphics pipeline, we employ a neural network pipeline that synthesizes realistic and detailed images of facial hair directly in the target image in under one second. The synthesis is controlled by simple and sparse guide strokes from the user that define the general structural and color properties of the target hairstyle. We qualitatively and quantitatively evaluate our chosen method against several alternative approaches. We show compelling interactive editing results with a prototype user interface that allows novice users to progressively refine the generated image to match their desired hairstyle, and demonstrate that our approach also allows for flexible and high-fidelity scalp hair synthesis.

1. Introduction

The ability to create and edit realistic facial hair in images has several important, wide-ranging applications. For example, law enforcement agencies could provide multiple images portraying how missing or wanted individuals would look if they tried to disguise their identity by growing a beard or mustache, or how such features would change over time as the subject aged. Someone considering growing or changing their current facial hair may want to pre-visualize their appearance with a variety of potential styles without making long-lasting changes to their physical appearance. Editing facial hair in pre-existing images would also allow users to enhance their appearance, for example in images used for their social media profile pictures. Insights into how to perform high-quality and controllable facial hair synthesis would also prove useful in improving face-swapping technology such as Deepfakes for subjects with complex facial hair.

One approach would be to infer the 3D geometry and appearance of any facial hair present in the input image, then manipulate or replace it as desired before rendering and compositing it into the original image. However, single-view 3D facial reconstruction is in itself an ill-posed and under-constrained problem, and most state-of-the-art approaches struggle in the presence of large facial hair and rely on parametric facial models that cannot accurately represent such structures. Furthermore, even state-of-the-art 3D hair rendering methods would struggle to provide sufficiently realistic results quickly enough to allow for interactive feedback for users exploring numerous subtle stylistic variations.

One could instead adopt a more direct and naive approach, such as copying regions of facial hair from exemplar images of a desired style into the target image. However, it would be extremely time-consuming and tedious to either find appropriate exemplars matching the position, perspective, color, and lighting conditions in the target image, or to modify these properties in the selected exemplar regions so as to assemble them into a coherent style matching both the target image and the desired hairstyle.

In this paper we propose a learning-based interactive approach to image-based hair editing and synthesis. We exploit the power of generative adversarial networks (GANs), which have shown impressive results for various image editing tasks. However, a crucial choice in our task is the input to the network that guides the synthesis process during training and inference. This input must be sufficiently detailed to allow for synthesizing an image that corresponds to the user's desires. Furthermore, it must also be tractable to obtain training data and extract input closely corresponding to that provided by users, so as to allow for training a generative model to perform this task. Finally, to allow novice artists to use such a system, authoring this input should be intuitive, while retaining interactive performance to allow for iterative refinement based on real-time feedback.

A set of sketch-like "guide strokes" describing the local shape and color of the hair to be synthesized is a natural way to represent such input, as it corresponds to how humans draw images. Straightforward techniques such as edge detection or image gradients would be an intuitive way to automatically extract such input from training images. However, while these could roughly approximate the types of strokes that a user might provide when drawing hair, we seek a representation that lends itself to intuitively editing the synthesis results without explicitly erasing and replacing each individual stroke.

Consider a vector field defining the dominant local orientation across the region in which hair editing and synthesis is to be performed. This is a natural representation for complex structures such as hair, which generally consists of strands or wisps with local coherence, and it can easily be converted to a set of guide strokes by integrating the vector field starting from randomly sampled positions in the input image. This representation also provides additional benefits that enable more intuitive user interaction. By extracting the vector field from the original facial hair in the region to be edited, or by creating one from a small number of coarse brush strokes, we can generate a dense set of guide strokes to serve as input to the network for image synthesis. Editing the vector field allows for adjusting the overall structure of the selected hairstyle (e.g., making a straight hairstyle more curly or tangled, or vice versa) with relatively little user input, while still synthesizing a large number of guide strokes corresponding to the user's input. As these strokes are the final input to the image synthesis networks, subtle local changes to the shape and color of the final image can be accomplished by simply editing, adding, or removing individual strokes.
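To make the stroke-generation step concrete, here is a minimal sketch of how a dense set of guide strokes might be traced from such an orientation field: starting at randomly sampled masked pixels, it follows the field with fixed-size Euler steps. The function name, step size, stroke count, and the use of simple Euler integration are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def trace_guide_strokes(orientation, mask, num_strokes=200,
                        step=1.0, max_steps=40, seed=0):
    """Integrate a 2D orientation field (H x W x 2, unit vectors)
    into polyline guide strokes, seeded at random masked pixels.

    A minimal Euler-integration sketch; a production system might
    use RK4, bidirectional tracing, and stroke-density control."""
    rng = np.random.default_rng(seed)
    h, w = mask.shape
    ys, xs = np.nonzero(mask)  # candidate seed pixels inside the mask
    idx = rng.choice(len(xs), size=min(num_strokes, len(xs)), replace=False)
    strokes = []
    for i in idx:
        p = np.array([xs[i], ys[i]], dtype=np.float64)
        pts = [p.copy()]
        for _ in range(max_steps):
            x, y = int(round(p[0])), int(round(p[1]))
            if not (0 <= x < w and 0 <= y < h) or not mask[y, x]:
                break  # stroke left the editable region
            d = orientation[y, x]  # local hair direction
            # Orientation fields are sign-ambiguous (d and -d describe the
            # same orientation), so keep the step consistent with the last one.
            if len(pts) > 1 and np.dot(d, p - pts[-2]) < 0:
                d = -d
            p = p + step * d
            pts.append(p.copy())
        if len(pts) > 2:
            strokes.append(np.stack(pts))  # (n, 2) polyline
    return strokes
```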
We carefully chose our network architectures and training techniques to allow for high-fidelity image synthesis, tractable training with appropriate input data, and interactive performance. Specifically, we propose a two-stage pipeline: while the first stage focuses on synthesizing realistic facial hair, the second stage refines this initial result to produce a plausible composition of the generated hair within the input image.
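As a rough, hypothetical illustration of how such a two-stage pipeline could be wired together, the PyTorch sketch below chains a synthesis generator with a refinement stage and composites the result back into the unmasked image. The tiny encoder architecture, channel layout, and compositing rule are assumptions chosen for clarity; the paper's actual networks and training losses are not reproduced here.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Small conv -> norm -> ReLU unit shared by both stages.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1),
        nn.InstanceNorm2d(cout),
        nn.ReLU(inplace=True))

class StageGenerator(nn.Module):
    """Toy stand-in for one GAN generator stage."""
    def __init__(self, cin):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(cin, 64),
            conv_block(64, 64),
            nn.Conv2d(64, 3, 3, padding=1),
            nn.Tanh())  # RGB output in [-1, 1]

    def forward(self, x):
        return self.net(x)

class TwoStagePipeline(nn.Module):
    """Stage 1 synthesizes hair from strokes + mask; stage 2 refines
    and composites that result into the original image."""
    def __init__(self):
        super().__init__()
        self.synth = StageGenerator(cin=3 + 1 + 3)   # image, mask, strokes
        self.refine = StageGenerator(cin=3 + 1 + 3)  # image, mask, stage-1 out

    def forward(self, image, mask, strokes):
        hair = self.synth(torch.cat([image, mask, strokes], dim=1))
        out = self.refine(torch.cat([image, mask, hair], dim=1))
        # Composite through the mask so pixels outside the edited
        # region pass through untouched.
        return mask * out + (1 - mask) * image
```

Compositing through the mask guarantees that only the user-selected region is modified, which matches the interactive editing setting described above.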
The success of such a learning-based method depends on the availability of a large-scale training set that covers a wide range of facial hairstyles. To our knowledge, no such dataset exists, so we fill this void by creating a new synthetic dataset that provides variation along many axes, such as style, color, and viewpoint, in a controlled manner. We also collect a smaller dataset of real facial hair images that we use to allow our method to better generalize to real images. We demonstrate how our networks can be trained using these datasets to achieve realistic results despite the relatively small number of real images used during training.
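One plausible way to combine a large synthetic corpus with a much smaller real one (an assumed training detail, not something specified in this excerpt) is to oversample the real images so that each minibatch contains both domains. The dataset wrapper and the 25% real fraction below are purely illustrative.

```python
import random
from torch.utils.data import Dataset, DataLoader

class MixedHairDataset(Dataset):
    """Draws samples from a large synthetic set and a small real set
    at a fixed expected ratio (hypothetical; the ratio is a tunable guess)."""
    def __init__(self, synthetic, real, real_fraction=0.25):
        self.synthetic, self.real = synthetic, real
        self.real_fraction = real_fraction

    def __len__(self):
        # Epoch length is driven by the large synthetic set.
        return len(self.synthetic)

    def __getitem__(self, i):
        if random.random() < self.real_fraction:
            return self.real[random.randrange(len(self.real))]
        return self.synthetic[i]

# Usage sketch:
# loader = DataLoader(MixedHairDataset(synth_ds, real_ds),
#                     batch_size=8, shuffle=True)
```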
We introduce a user interface with tools that allow for intuitive creation and manipulation of the vector fields used to generate the input to our synthesis framework. We conduct comparisons to alternative approaches, as well as extensive ablations demonstrating the utility of each component of our approach. Finally, we perform a perceptual study to evaluate the realism of images authored using our approach, and a user study to evaluate the utility of our proposed user interface. These results demonstrate that our approach is indeed a powerful and intuitive way to quickly author realistic illustrations of complex hairstyles.

2. Related Work

Texture Synthesis. As a complete review of example-based texture synthesis methods is out of the scope of this paper, we refer the reader to the surveys of [65, 2] for comprehensive overviews of modern texture synthesis techniques. In terms of methodology, example-based texture synthesis approaches can be mainly categorized into pixel-based methods [66, 19], stitching-based methods [18, 40, 42], optimization-based approaches [39, 26, 68, 35], and appearance-space texture synthesis [44]. Close to our work, Lukáč et al. [48] present a method that allows users to paint using the visual style of an arbitrary example texture. In [47], an intuitive editing tool is developed to support example-based painting that globally follows user-specified shapes while generating interior content that preserves the textural details of the source image. This tool is not specifically designed for hair synthesis, however, and thus lacks the local controls that users desire, as shown by our user study.

Recently, many researchers have attempted to leverage neural networks for texture synthesis [23, 45, 52]. However, it remains nontrivial for such techniques to accomplish simple editing operations, e.g., changing the local color or structure of the output, which are necessary in our scenario.

Style Transfer. The recent surge of style transfer research suggests an alternate approach to replicating stylized features from an example image to a target domain [22, 33, 46, 58].