Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation

Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhöfer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt

arXiv:2001.04947v3 [cs.GR] 5 Jul 2021

• This work was done when L. Liu was an intern at Max Planck Institute for Informatics.
• L. Liu and W. Wang are with the Department of Computer Science, The University of Hong Kong, Hong Kong, P.R. China. E-mail: [email protected], [email protected]
• W. Xu, M. Habermann, F. Bernard, H. Kim, and C. Theobalt are with the Graphics, Vision and Video Group at Max Planck Institute for Informatics, 66123 Saarbrücken, Germany. E-mail: [email protected], [email protected], [email protected], [email protected], [email protected]
• M. Zollhöfer is with the Computer Graphics Laboratory, Department of Computer Science, Stanford University, CA 94305, United States. E-mail: [email protected]

Abstract—Synthesizing realistic videos of humans using neural networks has been a popular alternative to the conventional graphics-based rendering pipeline due to its high efficiency. Existing works typically formulate this as an image-to-image translation problem in 2D screen space, which leads to artifacts such as over-smoothing, missing body parts, and temporal instability of fine-scale detail, such as pose-dependent wrinkles in the clothing. In this paper, we propose a novel human video synthesis method that approaches these limiting factors by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space. More specifically, our method relies on the combination of two convolutional neural networks (CNNs). Given the pose information, the first CNN predicts a dynamic texture map that contains time-coherent high-frequency details, and the second CNN conditions the generation of the final video on the temporally coherent output of the first CNN. We demonstrate several applications of our approach, such as human reenactment and novel view synthesis from monocular video, where we show significant improvement over the state of the art both qualitatively and quantitatively.

Index Terms—Video-based Characters, Deep Learning, Neural Rendering, Learning Dynamic Texture, Rendering-to-Video Translation

1 INTRODUCTION

Synthesizing realistic videos of humans is an important research topic in computer graphics and computer vision, which has a broad range of applications in visual effects (VFX) and games, virtual reality (VR) and telepresence, AI assistants, and many more. In this work, we propose a novel machine learning approach for synthesizing a realistic video of an actor that is driven from a given motion sequence. Only a monocular video and a personalized template mesh of the actor are needed as input. The motion of the actor in the target video can be controlled in different ways, for example by transferring the motion of a different actor in a source video, or by controlling the video footage directly with an interactive handle-based editor.

Nowadays, the de-facto standard for creating video-realistic animations of humans follows the conventional graphics-based human video synthesis pipeline based on highly detailed animated 3D models. The creation of these involves multiple non-trivial, decoupled, manual, and time-consuming steps: 3D shape and appearance scanning or design, hand-design or motion capture of target motions and deformations, and time-consuming photorealistic rendering. Aiming to streamline and expedite this process, graphics and vision researchers have in recent years developed data-driven methods to generate realistic images [1], [2], [3] and videos [4], [5], [6], [7], [8] of humans. Many of these use variants of adversarially trained convolutional neural networks to translate coarse conditioning inputs, which encode human appearance and/or pose, into photorealistic imagery.
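For readers unfamiliar with this class of methods, the following is a minimal PyTorch sketch of such a conditional adversarial translation network, mapping a coarse conditioning input (e.g., a rendered pose map) to an RGB frame. It is an illustrative pix2pix-style setup, not the architecture of any of the cited works; all layer sizes, module names, and loss choices are assumptions.

```python
# Minimal sketch of conditional image-to-image translation (pix2pix-style).
# Illustrative assumptions throughout: layer sizes, names, and losses are
# placeholders, not the architecture of the cited works.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Encoder-decoder that translates a conditioning map into an RGB image."""
    def __init__(self, cond_channels=3, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_channels, 64, 4, stride=2, padding=1),  # 256 -> 128
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),            # 128 -> 64
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 64 -> 128
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),
            nn.Tanh(),                                             # RGB in [-1, 1]
        )

    def forward(self, cond):
        return self.net(cond)

class Discriminator(nn.Module):
    """PatchGAN-style critic that scores (conditioning, image) pairs."""
    def __init__(self, cond_channels=3, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_channels + img_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

# One adversarial training step on a (conditioning, ground-truth frame) pair.
G, D = Generator(), Discriminator()
cond = torch.randn(1, 3, 256, 256)   # e.g., rendered skeleton/pose map
real = torch.randn(1, 3, 256, 256)   # ground-truth video frame
fake = G(cond)
loss_d = (F.softplus(-D(cond, real)) + F.softplus(D(cond, fake.detach()))).mean()
loss_g = F.softplus(-D(cond, fake)).mean() + F.l1_loss(fake, real)  # L1 anchors colors
```

The limitation the authors point to is visible in this formulation: from a sparse conditioning signal, the generator must hallucinate both where the body lands in screen space and what its fine-scale surface detail looks like, with nothing tying that detail to the body surface across frames.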
A prominent problem with existing methods is that fine-scale details are often over-smoothed and temporally incoherent; e.g., wrinkles often do not move coherently with the garments but appear to lie on a separate, spatially fixed layer floating in screen space (see the supplementary video). While some approaches try to address these challenges by enforcing temporal coherence in the adversarial training objective [4], [5], [6], we argue that most problems are due to a combination of two limiting factors: 1) the conditioning input is often a very coarse and sparse 2D or 3D skeleton pose rather than a more complete 3D human animation model, and 2) image translation is learned only in 2D screen space. This fails to properly disentangle appearance effects from residual image-space effects that are best handled by 2D image convolutions. Since appearance effects are best described on the actual 3D body surface, they should be handled by suitable convolutions that take the manifold structure into account. As a consequence, networks struggle to jointly generate results that show both complete human body imagery, without missing body parts or silhouette errors, and plausible temporally coherent high-frequency surface detail.

Fig. 1. We present an approach for synthesizing realistic videos of humans. Our method allows for: a) motion transfer between a pair of monocular videos, b) interactively controlling the pose of a person in the video, and c) monocular bullet time effects, where we freeze time and virtually rotate the camera.

We propose a new human video synthesis method that tackles these limiting factors and explicitly disentangles the learning of time-coherent pose-dependent fine-scale detail from the time-coherent pose-dependent embedding of the human in 2D screen space. Our approach relies on a monocular training video of the actor performing various motions, and on a skinned person-specific template mesh of the actor. The latter is used to capture the shape and pose of the actor in each frame of the training video using an off-the-shelf monocular performance capture approach. Our video synthesis algorithm uses a three-stage approach based on two CNNs and the computer graphics texturing pipeline:

1) Given the target pose in each video frame, encoded as a surface normal map of the posed body template, the first CNN is trained to predict a dynamic texture map that contains the pose-dependent and time-coherent high-frequency detail. In this normalized texture space, local details such as wrinkles always appear at the same uv-location, since the rigid and articulated body motion is already factored out by the monocular performance capture algorithm, which significantly simplifies the learning task. This frees the network from having to synthesize the body at the right screen-space location, leading to temporally more coherent and detailed results.

2) We apply the dynamic texture on top of the animated human body model to render a video of the animation that exhibits temporally stable high-frequency surface details, but lacks effects that cannot be explained by the rendered mesh alone.

3) Finally, our second CNN conditions the generation of the final video on the temporally coherent output of the first CNN. This refinement network synthesizes foreground-background interactions, such as shadows, naturally blends the foreground and background, and corrects geometric errors caused by tracking and skinning inaccuracies, which are especially visible at the silhouettes.
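Read literally, the three stages compose as in the PyTorch sketch below. This is a schematic under stated assumptions, not the authors' implementation: TexNet and RefNet are hypothetical stand-ins for the two CNNs, and the two render_* functions are placeholders for the classical texturing and rasterization steps performed with the tracked template mesh; all names, shapes, and layer choices are illustrative.

```python
# Schematic sketch of the three-stage pipeline described above. All names,
# shapes, and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class TexNet(nn.Module):
    """Stage 1 CNN (assumed): normal map in uv space -> dynamic RGB texture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, normal_map_uv):
        return self.net(normal_map_uv)

class RefNet(nn.Module):
    """Stage 3 CNN (assumed): textured rendering -> final frame, adding
    shadows, foreground-background blending, and silhouette corrections."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, rendering):
        return self.net(rendering)

def render_normal_map_in_uv(posed_template):
    """Placeholder: bake the posed template's surface normals into the uv
    atlas; in the paper this pose encoding comes from monocular capture."""
    return torch.randn(1, 3, 512, 512)

def render_with_texture(posed_template, texture, camera):
    """Placeholder: rasterize the posed template with the predicted texture
    (stage 2, the classical graphics texturing step)."""
    return torch.randn(1, 3, 512, 512)

def synthesize_frame(posed_template, camera, tex_net, ref_net):
    normal_uv = render_normal_map_in_uv(posed_template)           # pose encoding
    texture = tex_net(normal_uv)                                  # 1) dynamic texture
    render = render_with_texture(posed_template, texture, camera) # 2) texturing
    return ref_net(render)                                        # 3) refinement

frame = synthesize_frame(posed_template=None, camera=None,
                         tex_net=TexNet(), ref_net=RefNet())
```

The decomposition this sketch mirrors is the point of the method: TexNet operates entirely in the normalized uv space, where wrinkles stay at fixed texel locations, while the screen-space embedding of the body is handled by the renderer and by RefNet rather than learned from scratch.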
To the best of our knowledge, our approach is the first dynamic-texture neural rendering approach for human bodies that disentangles human video synthesis into explicit texture-space and image-space neural rendering steps: pose-dependent neural texture generation and rendering-to-realistic-video translation. This new problem formulation yields more accurate human video synthesis results, which better preserve the spatial, temporal, and geometric coherence of the actor's appearance compared to existing state-of-the-art methods.

As shown in Figure 1, our approach can be utilized in various applications, such as human motion transfer, interactive reenactment, and novel view synthesis from monocular video. In our experiments, we demonstrate these applications and show that our approach is superior to the state of the art both qualitatively and quantitatively.

Our main contributions are summarized as follows:

• A novel three-stage approach that disentangles learning pose-dependent fine-scale details from the pose-dependent embedding of the human in 2D screen space.
• High-resolution video synthesis of humans with controllable target motions and temporally coherent fine-scale detail.

2 RELATED WORK

In the following, we discuss human performance capture, classical video-based rendering, and learning-based human performance cloning, as well as the underlying image-to-image translation approaches based on conditional generative adversarial networks.

Classical Video-based Characters. Classically, the domain gap between coarse human proxy models and realistic imagery can be bridged using image-based rendering techniques. These strategies can be used for the generation of video-based characters [9], [10], [11], [12] and enable free-viewpoint video [13], [14], [15], [16], [17]. Even relightable performances can be obtained [18] by disentangling illumination and scene reflectance. The synthesis of new body motions and viewpoints around an actor is possible [9] with such techniques.

Modeling Humans from Data. Humans can be modeled from data using mesh-based 3D representations. For example, parametric models for different body parts are widely employed in the literature [19], [20], [21], [22], [23], [24]. Deep Appearance Models [25] learn dynamic view-dependent texture maps for the human head. The paGAN approach [26] builds a dynamic avatar from a single monocular image. Recently, models of the entire human body have become popular [27], [28]. There are also some recent works on cloth modeling [29], [30], [31]. One drawback of these models is that they do not model the appearance […] renderings of human bodies, which is a more challenging task. […]
