Stacked Deep Multi-Scale Hierarchical Network for Fast Bokeh Effect Rendering from a Single Image

Saikat Dutta, IIT Madras, Chennai, India ([email protected])
Sourya Dipta Das, Jadavpur University, Kolkata, India ([email protected])
Nisarg A. Shah, IIT Jodhpur, Jodhpur, India ([email protected])
Anil Kumar Tiwari, IIT Jodhpur, Jodhpur, India ([email protected])

arXiv:2105.07174v1 [cs.CV] 15 May 2021

Abstract

The Bokeh Effect is one of the most desirable effects in photography for rendering artistic and aesthetic photos. Usually, it requires a DSLR camera with different aperture and shutter settings and certain photography skills to generate this effect. In smartphones, computational methods and additional sensors are used to overcome the physical lens and sensor limitations to achieve such an effect. Most of the existing methods utilize additional sensor data or a pretrained network for fine depth estimation of the scene, and sometimes use a portrait segmentation pretrained network module to segment salient objects in the image. Because of this, such networks have many parameters, become runtime intensive, and are unable to run on mid-range devices. In this paper, we used an end-to-end Deep Multi-Scale Hierarchical Network (DMSHN) model for direct Bokeh effect rendering of images captured with a monocular camera. To further improve the perceptual quality of the effect, a stacked model consisting of two DMSHN modules is also proposed. Our model does not rely on any pretrained network module for Monocular Depth Estimation or Saliency Detection, thus significantly reducing the model size and runtime. Stacked DMSHN achieves state-of-the-art results on the large-scale EBB! dataset with around 6x less runtime compared to the current state-of-the-art model in processing HD quality images.

1. Introduction

The word "Bokeh" originated from the Japanese word "boke", which means blur. In photography, the Bokeh effect refers to the pleasing or aesthetic quality of the blur produced in the out-of-focus parts of an image produced by a camera lens. These images are usually captured with DSLR cameras using a large focal length and a large aperture. Unlike DSLR cameras, mobile phone cameras have limitations on their size and weight. Thus, mobile phones cannot generate the same quality of bokeh as a DSLR can, since mobile cameras have a small, fixed-size aperture that produces images with the scene mostly in focus, depending on the depth map of the scene. One approach to overcome these limitations is to computationally simulate the bokeh effect on mobile devices with a small camera. In some works, additional depth information obtained from a dual-pixel camera [19] or a stereo camera (one main camera and one calibrated subcamera) [15, 3, 14] is incorporated into the model to simulate a more realistic bokeh image. However, these systems suffer from a variety of disadvantages: (a) adding a stereo camera or depth sensor increases the size and cost of the device and its power consumption during usage; (b) structured-light depth sensors can be affected by poor resolution or high sensitivity to interference from ambient light; (c) depth estimation performs badly when the target of interest is substantially distant from the camera, since it is difficult to estimate a good depth map for farther regions because of the small stereo baseline of the stereo camera; (d) these approaches cannot be used to enhance pictures already taken with monocular cameras as post-processing, since depth information is often not saved along with the picture.

To address these problems, we propose the Deep Multi-Scale Hierarchical Network (DMSHN) for Bokeh effect rendering from a monocular lens without using any specialized hardware. The proposed model synthesizes the bokeh effect under a "coarse-to-fine" scheme by exploiting multi-scale input images at different processing levels.
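The coarse-to-fine control flow can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the 2x2 average-pool downsampling, the nearest-neighbour upsampling, and the `level` function are all assumptions for illustration; in the real DMSHN each level is a trained CNN encoder-decoder predicting a residual image, whereas `level` here is a placeholder that returns a zero residual so the sketch stays runnable.

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling (assumed pyramid operator; requires even H, W).
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour 2x upsampling of the coarser estimate.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def level(x, coarser_estimate):
    # Placeholder for one DMSHN encoder-decoder level. A trained level
    # would predict a residual image at this scale, guided by the
    # upsampled estimate from the level below; here it returns zeros.
    return np.zeros_like(x)

def dmshn_coarse_to_fine(img, levels=3):
    # Build the multi-scale input pyramid, finest scale first.
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    # The coarsest level produces the initial estimate.
    estimate = pyramid[-1] + level(pyramid[-1], None)
    # Each finer level adds its residual on top, using the upsampled
    # coarser estimate as guidance.
    for x in reversed(pyramid[:-1]):
        estimate = x + level(x, upsample(estimate))
    return estimate
```

With the zero-residual placeholder the network output equals the input image; a trained model would instead accumulate scale-wise residuals into the rendered bokeh.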
Each lower level acts in a residual manner, contributing its residual image to the higher level along with intermediate feature aggregation. In this way, low-level encoders can provide additional global intermediate features to higher levels to improve saliency, and higher-level encoders can provide more local intermediate features to improve the fidelity of the generated bokeh images. Our model does not depend on any Monocular Depth Estimation or Saliency Detection pretrained network module, hence reducing the number of parameters and the runtime considerably. We have also explored a stacked version of DMSHN, namely Stacked DMSHN, where two DMSHN modules are connected horizontally to boost performance. It achieves state-of-the-art results in Bokeh Effect Rendering on monocular images. Although this improvement in accuracy in Stacked DMSHN comes with increased runtime and model parameters, DMSHN and Stacked DMSHN can still process HD quality images at 50 fps and 25 fps, respectively, which is significantly faster than other methods in the literature. We have also measured the runtime of our models deployed on mid-range smartphone devices to demonstrate their efficiency. In our experiments, we used the large-scale EBB! dataset [7], containing more than 10,000 images collected in the wild with a DSLR camera.

2. Related Work

Many works in Bokeh Effect Rendering leveraged depth information from images captured by two cameras. Liu et al. [14] presented a bokeh simulation method using a depth map estimated through stereo matching. They also designed a convenient bokeh control interface that uses a little user interaction to identify the salient object and controls the bokeh effect by setting a specific kernel. Busam et al. [3] also proposed a fast and efficient stereo vision-based algorithm that performs refocusing using a high-quality disparity map obtained via efficient stereo depth estimation. Recently, Luo et al. [15] proposed a novel deep neural architecture named wavelet synthesis neural network (WSN) to produce high-quality disparity maps on smartphones using a pair of calibrated stereo images. However, these methods are ineffective in the case of monocular cameras or in post-processing of previously captured images.

In one of the earliest works in Monocular Bokeh Effect Rendering, Shen et al. [18] used a Fully Convolutional Network to perform portrait image segmentation and, using this segmentation map, generated depth-of-field images with a uniformly blurred background on portrait images. Wadhwa et al. [19] proposed a system to generate synthetic shallow depth-of-field images on mobile devices by incorporating a portrait segmentation network and a depth map from the camera's dual-pixel auto-focus system. However, a limitation of their system is that the rendered bokeh is not photorealistic compared to actual bokeh photos taken with DSLR cameras, because an approximated disk blur kernel is used to blur the background. Xu et al. [22] also proposed a similar approach, where they used two different deep neural networks to obtain a depth map and a portrait segmentation map from a single image. They then improved those initial estimates using another deep neural network and trained a depth- and segmentation-guided Recursive Neural Network to approximate and accelerate the bokeh rendering.

Lijun et al. [13] presented a memory-efficient deep neural network architecture with a lens blur module, which synthesizes the lens blur, and a guided upsampling module to generate bokeh images at high resolution with user-controllable camera parameters at interactive rendering speed. Dutta [5] formulated the bokeh image as a weighted sum of the input image and its differently blurred versions obtained using Gaussian blur kernels of different sizes; the weights for this purpose were generated using a fine-tuned depth estimation network. Purohit et al. [17] designed a model consisting of a densely connected encoder and decoder, taking advantage of joint Dynamic Filtering and intensity estimation for spatially-aware background blurring. Their model utilized pretrained networks for depth estimation and saliency map segmentation to guide the bokeh rendering process. Zheng et al. [8] used a multi-scale predictive filter CNN consisting of a Gate Fusion Block, a Constrained Predictive Filter Block, and an Image Reconstruction Block. They trained their model on image patches concatenated with pixel coordinate maps with respect to the full-scale image. Xiong et al. [8] used an ensemble of modified U-Nets incorporating a residual attention mechanism, multiple Atrous Spatial Pyramid Pooling blocks, and Fusion Modules. Yang et al. [8] used two stacked bokehNets with additional memory blocks to capture global and local features; the feature maps generated by each model are concatenated and fed to a Selective Kernel Network [11]. Ignatov et al. [7] presented a multi-scale end-to-end deep learning architecture, PyNet, for natural bokeh image rendering, utilizing both the input image captured with a narrow aperture and a pre-computed depth map of the same image.

In this paper, we propose a fast, lightweight, efficient network, Stacked DMSHN, for single-image Bokeh effect rendering. Our model does not require additional depth maps, which makes it suitable for post-processing photos.

3. Proposed Method

We used the Stacked Deep Multi-Scale Hierarchical Network (DMSHN) for Bokeh Effect Rendering. The model diagram of Stacked DMSHN is shown in Fig. 1. Stacked DMSHN consists of two DMSHN base networks cascaded one after another. The details of the DMSHN architecture are described in the following.

Figure 1. Stacked Deep Multi-Scale Hierarchical Network.

Figure 2. Encoder and Decoder Architecture. Numbers below the curly braces denote the number of output channels of the corresponding layers.

Base Network: We used the Deep Multi-Scale Hierarchical Network (DMSHN) as the base network.
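The cascading of the two base networks amounts to simple function composition: the second DMSHN takes the first module's rendered output as its input image and refines it. A minimal sketch, with identity functions standing in for the two trained base networks (the names `stacked_dmshn`, `base1`, and `base2` are illustrative, not from the paper):

```python
import numpy as np

def stacked_dmshn(img, base_modules):
    # Cascade of DMSHN base networks: each module takes the previous
    # module's output image as input and refines the rendered bokeh.
    out = img
    for module in base_modules:
        out = module(out)
    return out

# Identity stand-ins for the two trained DMSHN base networks.
base1 = base2 = lambda x: x
rendered = stacked_dmshn(np.ones((16, 16, 3)), [base1, base2])
```

Horizontal stacking trades extra parameters and runtime for perceptual quality, which matches the accuracy/speed trade-off between DMSHN and Stacked DMSHN reported above.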
