The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)

Human Synthesis and Scene Compositing

Mihai Zanfir (3,1), Elisabeta Oneata (3,1), Alin-Ionut Popa (3,1), Andrei Zanfir (1,3), Cristian Sminchisescu (1,2)
(1) Google Research
(2) Department of Mathematics, Faculty of Engineering, Lund University
(3) Institute of Mathematics of the Romanian Academy
{mihai.zanfir, elisabeta.oneata, alin.popa, andrei.zanfir}@imar.ro, [email protected]

Abstract

Generating good quality and geometrically plausible synthetic images of humans, with the ability to control appearance, pose and shape parameters, has become increasingly important for a variety of tasks ranging from photo editing and fashion virtual try-on to special effects and image compression. In this paper, we propose a HUSC (HUman Synthesis and Scene Compositing) framework for the realistic synthesis of humans with different appearance, in novel poses and scenes. Central to our formulation is 3d reasoning for both people and scenes, in order to produce realistic collages, by correctly modeling perspective effects and occlusion, by taking into account scene semantics, and by adequately handling relative scales.

In contrast, other approaches avoid the 3d modeling pipeline altogether, aiming to achieve realism by directly manipulating images and by training on large-scale datasets. While this is cheap and attractive, offering the advantage of producing outputs with close-to-real statistics, these methods are not nearly as controllable as the 3d graphics ones, and their results can be geometrically inconsistent and often unpredictable.

In this work we attempt to combine the relatively accessible methodology of both domains and propose a framework that is able to realistically synthesize a photograph of a person, in any given pose and shape, and blend it veridically with a new scene, while obeying 3d geometry and appear-