Visiting the Invisible: Layer-by-Layer Completed Scene Decomposition

Chuanxia Zheng · Duy-Son Dao · Guoxian Song · Tat-Jen Cham · Jianfei Cai

arXiv:2104.05367v1 [cs.CV] 12 Apr 2021

Abstract Existing scene understanding systems mainly focus on recognizing the visible parts of a scene, ignoring the intact appearance of physical objects in the real world. Concurrently, image completion has aimed to create a plausible appearance for invisible regions, but requires a manual mask as input. In this work, we propose a higher-level scene understanding system to tackle both visible and invisible parts of objects and backgrounds in a given scene. In particular, we build a system that decomposes a scene into individual objects, infers their underlying occlusion relationships, and even automatically learns which parts of the objects are occluded and need to be completed. To disentangle the occlusion relationships of all objects in a complex scene, we use the fact that an unoccluded object in front is easy to identify, detect, and segment. Our system interleaves the two tasks of instance segmentation and scene completion through multiple iterations, solving for objects layer-by-layer. We first provide a thorough experiment using a new realistically rendered dataset with ground truths for all invisible regions. To bridge the domain gap to real imagery, where ground truths are unavailable, we then train another model with pseudo-ground-truths generated by our trained synthesis model. We demonstrate results on a wide variety of datasets and show significant improvement over the state of the art. The code will be available at https://github.com/lyndonzheng/VINV.

Keywords Layered scene decomposition · Scene completion · Amodal instance segmentation · Instance depth order · Scene recomposition

Chuanxia Zheng · Guoxian Song · Tat-Jen Cham
School of Computer Science and Engineering, Nanyang Technological University, Singapore.
E-mail: [email protected], [email protected], [email protected]

Duy-Son Dao · Jianfei Cai
Department of Data Science & AI, Monash University, Australia.
E-mail: [email protected], [email protected]

1 Introduction

The vision community has made rapid advances in scene understanding tasks, such as object classification and localization (Girshick et al., 2014; He et al., 2015; Ren et al., 2015), scene parsing (Badrinarayanan et al., 2017; Chen et al., 2017; Long et al., 2015), instance segmentation (Chen et al., 2019; He et al., 2017; Pinheiro et al., 2015), and layered scene decomposition (Gould et al., 2009; Yang et al., 2010; Zhang et al., 2015). Despite their impressive performance, these systems deal only with the visible parts of scenes without trying to exploit invisible regions, which results in an incomplete representation of real objects.

In parallel, significant progress on generation tasks has been made with the emergence of deep generative networks, such as GAN-based models (Goodfellow et al., 2014; Gulrajani et al., 2017; Karras et al., 2019), VAE-based models (Kingma and Welling, 2014; Vahdat and Kautz, 2020; Van Den Oord et al., 2017), and flow-based models (Dinh et al., 2014, 2017; Kingma and Dhariwal, 2018). Empowered by these techniques, image completion (Iizuka et al., 2017; Yu et al., 2018; Zheng et al., 2019) and object completion (Ehsani et al., 2018; Ling et al., 2020; Zhan et al., 2020) have made it possible to create plausible appearances for occluded objects and backgrounds. However, these systems depend on manual masks or visible ground-truth masks as input, rather than automatically understanding the full scene.

In this paper, we aim to build a system that has the ability to decompose a scene into individual objects, infer their underlying occlusion relationships, and moreover imagine what occluded objects may look like, using only an image as input. This novel task involves the classical recognition task of instance segmentation, to predict the geometry and category of all objects in a scene, and the generation task of image completion, to reconstruct invisible parts of objects and backgrounds.

Decomposing a scene into instances with completed appearances in one pass is extremely challenging. This is because realistic natural scenes often consist of a vast collection of physical objects, with complex scene structure and occlusion relationships, especially when one object is occluded by multiple objects, or when instances have deep hierarchical occlusion relationships.

Our core idea stems from the observation that it is much easier to identify, detect, and segment foreground objects than occluded objects. Motivated by this, we propose a Completed Scene Decomposition Network (CSDNet) that learns to segment and complete each object in a scene layer-by-layer. As shown in Fig. 1, our layered scene decomposition network segments only the fully visible objects in each layer (Fig. 1(b)). If the system properly segments the foreground objects, it automatically learns which parts of occluded objects are actually invisible and need to be filled in. The completed image is then passed back to the layered scene decomposition network, which can again focus purely on detecting and segmenting visible objects. As the interleaving proceeds, a structured instance depth order (Fig. 1(c)) is progressively derived from the inferred absolute layer order. The thorough decomposition of a scene along with its spatial relationships allows the system to freely recompose a new scene (Fig. 1(d)).

Fig. 1 Example results of scene decomposition and recomposition. (a) Input. (b) Our model structurally decomposes a scene into individual completed objects. Red rectangles highlight the originally invisible parts. (c) The inferred pairwise order (top graph) and edited order (bottom graph) of the instances. Blue nodes indicate the deleted objects while the red node is the moved object. (d) The new recomposed scene.

Another challenge in this novel task is the lack of data: there is no complex, realistic dataset that provides intact ground-truth appearance for originally occluded objects and backgrounds in a scene. While recent works (Li and Malik, 2016; Zhan et al., 2020) introduced a self-supervised way to tackle amodal completion using only visible annotations, they cannot make a fair quantitative comparison since no real ground truths are available. To mitigate this issue, we constructed a high-quality rendered dataset, named Completed Scene Decomposition (CSD), based on more than 2k indoor rooms. Unlike the datasets in (Dhamo et al., 2019; Ehsani et al., 2018), our dataset is designed to have more typical camera viewpoints, with near-realistic appearance.

As elaborated in Section 5.2, the proposed system performs well on this rendered dataset, both qualitatively and quantitatively outperforming existing methods in completed scene decomposition, in terms of instance segmentation, depth ordering, and amodal mask and content completion. To further demonstrate the generalization of our system, we extend it to real datasets. As no ground-truth annotations and appearance are available for training, we created pseudo-ground-truths for real images using our model purely trained on CSD, and then fine-tuned this model accordingly. This model outperforms state-of-the-art methods (Qi et al., 2019; Zhan et al., 2020; Zhu et al., 2017) on amodal instance segmentation and depth ordering tasks, despite these methods being specialized to their respective tasks rather than our holistic completed scene decomposition task.

In summary, we propose a layer-by-layer scene decomposition network that jointly learns structural scene decomposition and completion, rather than treating them separately as in existing works (Dhamo et al., 2019; Ehsani et al., 2018; Zhan et al., 2020). To our knowledge, this is the first work that proposes to complete objects based on the global context, instead of tackling each object independently. To address this novel task, we render a high-quality dataset with ground truth for all instances. We then provide a thorough ablation study using this rendered dataset, in which we demonstrate that the method substantially outperforms existing methods that address the task in isolation. On real images, we improve performance over the recent state-of-the-art methods by using pseudo-ground-truths as weakly-supervised labels. The experimental results show that our CSDNet is able to acquire a full decomposition of a scene with only an image as input, which enables many applications, e.g. object-level image editing.

The rest of the paper is organized as follows. We discuss the related work in Section 2, and describe our layer-by-layer CSDNet in detail in Section 3. In Section 4 we present our rendered dataset. We then show the experimental results on this synthetic dataset as well as on real-world images in Section 5, followed by a conclusion in Section 6.
Table 1 Comparison with related work based on three aspects: outputs, inputs and data. I: image, In: inmodal segmentation, O: occlusion order, SP: scene parsing, AB: amodal bounding box, AS: amodal surface, A: amodal segmentation, D: depth, IRGB: intact RGB object.

Paper                  Outputs   Inputs   Data
                       SP, O     I        LabelMe, PASVOC, others
(Yang et al., 2011)    In, O     I        PASVOC
(Tighe et al., 2014)   SP, O     I        LabelMe, SUN

with depth ordering. While these methods evaluate occlusion ordering, their main goal is to improve inmodal perception accuracy for object detection, image parsing, or instance segmentation using the spatial occlusion information. In contrast to these methods, our method not only focuses on visible regions with structural inmodal perception, but also tries to solve for amodal perception.
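The pairwise occlusion graph of Fig. 1(c) follows directly from the absolute layer order once the amodal masks are known. The sketch below illustrates one way this derivation could work; masks are modeled as sets of pixel coordinates for brevity, and every name and signature here is an assumption for illustration, not CSDNet's interface.

```python
# Hypothetical derivation of pairwise occlusion order from absolute layers.

def pairwise_order(amodal_masks, layer_of):
    """Return directed edges (a, b) meaning instance a occludes instance b.

    amodal_masks: dict mapping instance id -> set of (row, col) pixels of the
                  full (amodal) mask
    layer_of:     dict mapping instance id -> absolute layer index, where
                  layer 0 holds the fully visible front objects
    """
    edges = []
    ids = sorted(amodal_masks)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            # Only instances whose amodal masks overlap can occlude each other.
            if not (amodal_masks[a] & amodal_masks[b]):
                continue
            if layer_of[a] < layer_of[b]:
                edges.append((a, b))  # a sits in an earlier (front) layer
            elif layer_of[b] < layer_of[a]:
                edges.append((b, a))
    return edges
```

Instances in the same layer, or with disjoint amodal masks, contribute no edge, which matches the sparse pairwise graphs shown in Fig. 1(c).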
