Light Field Compression Using Translation-Assisted View Estimation

Baptiste Hériard-Dubreuil, Irene Viola, Touradj Ebrahimi
Multimedia Signal Processing Group (MMSPG)
École Polytechnique Fédérale de Lausanne (EPFL)
Lausanne, Switzerland
firstName.lastName@epfl.ch

Abstract—Light field technology has recently been gaining traction in the research community. Several acquisition technologies have been demonstrated to properly capture light field information, and portable devices have been commercialized to the general public. However, new and efficient compression algorithms are needed to sensibly reduce the amount of data that needs to be stored and transmitted, while maintaining an adequate level of perceptual quality. In this paper, we propose a novel light field compression scheme that uses view estimation to recover the entire light field from a small subset of encoded views. Experimental results on a widely used light field dataset show that our method achieves good coding efficiency, with average rate savings of 54.83% with respect to HEVC.

Index Terms—light field compression, view estimation, light field coding

This work has been conducted in the framework of the projects "Light field Image and Video coding and Evaluation" and "Advanced Visual Representation and Coding in Augmented and Virtual Reality", both funded by The Swiss National Foundation for Scientific Research under grant numbers 200021-159575 and 200021-178854.

I. INTRODUCTION

Light field photography has recently attracted the interest of the research community, as it allows users to visualize and interact with three-dimensional scenes in a more realistic and immersive way. However, the increased volume of data generated during acquisition requires new solutions for efficient storage and transmission. In particular, new compression solutions are needed to minimize the size of the light field data while maintaining an acceptable visual quality.

Over the years, several solutions have been proposed to encode light field images. Some exploit view synthesis or estimation to improve the coding efficiency. Jiang et al. use HEVC to encode a low-rank representation of the light field data, obtained through homography-based low-rank approximation; they then reconstruct the entire light field using the weighting and homography parameters [1]. Zhao et al. propose a compression scheme that encodes and transmits only part of the views using HEVC, while the non-encoded views are estimated as a linear combination of the already transmitted views [2]. Viola et al. propose a graph learning approach to estimate the disparities among the views, which can be used at the decoder side to reconstruct the 4D light field from a subset of views [3]. Astola et al. propose a method that combines warping at hierarchical levels with sparse prediction to reconstruct the 4D light field from a predefined set of perspective views [4], [5]; the solution was recently adopted as part of the JPEG Pleno Verification Model (VM) (WaSP configuration) [6]. Rizkallah et al. and Su et al. use CNN-based view synthesis to reconstruct the entire light field from four corner views, employing graph-based transforms [7] or 4D shape-adaptive DCT [8] to encode the residuals. De Carvalho et al. propose the adoption of the 4D DCT to obtain a compact representation of the light field structure [9]; the DCT coefficients are grouped using hexadeca-trees, for each bitplane, and encoded with an arithmetic encoder. The solution was also adopted as part of the JPEG Pleno VM (MuLE configuration) [6].

In this paper, we propose a new method that uses view estimation to reconstruct the 4D light field structure from a given subset of views, which are translated to account for the camera disparity among the views. To improve the reconstruction quality, residual encoding is implemented using Principal Component Analysis (PCA) to reduce the rate overhead. Results show that our method outperforms other state-of-the-art solutions in light field compression in terms of coding efficiency.

The paper is organized as follows. Section II presents the proposed approach in detail. Section III illustrates the validating experiment, and Section IV presents and analyzes the results. Finally, we draw some conclusions in Section V.

II. PROPOSED APPROACH

The global architecture of the encoder is represented in Figure 1. The encoder receives as input the 4D light field structure, along with the selected encoding parameters. Given the chosen subset of views to be encoded (reference views), it performs estimation of the remaining views. The reference views are compressed using HEVC/H.265 and transmitted to the decoder. Each view to be estimated is then predicted through a linear combination of the compressed reference views, which are translated to account for the displacement among the views. The estimation is performed on a block basis; the blocks are identified through quad-tree segmentation, to better account for the presence of several depth planes in the scene. The residuals for each estimated view are then computed, approximated using PCA, and transmitted to the decoder along with the rest of the parameters for the view estimation.

Fig. 1: Encoder architecture. The reference views are indicated in yellow, whereas the estimated views are indicated in green.

At the decoder side, the reference views are decompressed, and the segmentation and prediction information is used to estimate the remaining views. The residuals are then added to obtain the final reconstructed views. In the following subsections, we present the components of the encoder in detail.

A. Segmentation

Given a point P = (V_x, V_y, V_z) in a 3D scene, the disparity between its projected points p_A and p_B into two views A and B can be expressed as a function of the distance z (depth) from P to the camera and the translation t_{AB} between A and B [2]. Thus, in order to precisely estimate one point in a given view from a set of neighboring views, different translation factors should be assigned to each depth plane.

To limit the complexity of the encoder and the additional information to be sent to the receiver, we approximate the subdivision into different depth planes by using blocks, obtained through quad-tree segmentation. The segmentation is applied on the sum of the distances between the extreme horizontal views and the extreme vertical views of the Y channel, which gives us an estimation of the depth boundaries (this estimation is used for all the views). More precisely, defining I_{i,j} as the Y channel of view (i, j), with i = 1, ..., M and j = 1, ..., N, where M and N represent the angular resolution of the 4D light field, and |·| denoting the absolute value operator, the sum of distances C is computed as:

$$C = \left| I_{\lfloor M/2 \rfloor, 1} - I_{\lfloor M/2 \rfloor, N} \right| + \left| I_{1, \lfloor N/2 \rfloor} - I_{M, \lfloor N/2 \rfloor} \right| \qquad (1)$$

As mentioned above, a quad-tree based algorithm is used to compute the blocks. Through a configuration file, the user can set the approximate number of blocks to be used in the estimation.
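To make the segmentation step concrete, the following NumPy sketch computes the distance map of eq. (1) and derives a block partition from it. This is a minimal sketch under assumptions, not the paper's implementation: the greedy rule that always quarters the block with the largest accumulated distance, as well as the names `distance_map`, `quadtree_segment`, `n_blocks`, and `min_size`, are illustrative.

```python
import heapq

import numpy as np


def distance_map(views):
    """Depth-boundary estimate C of eq. (1): sum of the absolute
    differences between the extreme horizontal and the extreme vertical
    Y-channel views. `views` has shape (M, N, H, W): angular resolution
    (M, N), spatial resolution (H, W)."""
    M, N = views.shape[:2]
    horizontal = np.abs(views[M // 2, 0] - views[M // 2, N - 1])
    vertical = np.abs(views[0, N // 2] - views[M - 1, N // 2])
    return horizontal + vertical


def quadtree_segment(C, n_blocks, min_size=8):
    """Partition the image area into roughly `n_blocks` rectangles by
    repeatedly quartering the block with the largest accumulated distance,
    so that splits concentrate around depth discontinuities."""
    H, W = C.shape
    heap = [(-float(C.sum()), 0, 0, H, W)]  # max-heap via negated energy
    while len(heap) < n_blocks:
        neg_energy, y, x, h, w = heapq.heappop(heap)
        if min(h, w) < 2 * min_size:  # too small to split further
            heapq.heappush(heap, (neg_energy, y, x, h, w))
            break
        h1, w1 = h // 2, w // 2
        for yy, hh in ((y, h1), (y + h1, h - h1)):
            for xx, ww in ((x, w1), (x + w1, w - w1)):
                sub = C[yy:yy + hh, xx:xx + ww]
                heapq.heappush(heap, (-float(sub.sum()), yy, xx, hh, ww))
    return [(y, x, h, w) for _, y, x, h, w in heap]
```

Splitting the block with the largest summed distance is one plausible reading of "approximate number of blocks"; the paper only specifies that the block count is user-configurable.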
B. Reference compression

In order to achieve a good compression efficiency, the selected reference views are arranged into a pseudo-temporal video sequence and compressed using the HEVC/H.265 video codec (version HM-15.0). However, it should be noted that any image or video codec can be used to encode the reference views. The position of the selected reference views is signalled using a binary mask, which is entropy encoded and sent to the decoder.

C. Translation estimation

For each view to be predicted, the predictor expresses each block as a linear combination of a subset of blocks from the compressed reference views, after translation, to account for the camera disparity.

Using K reference views for the estimation, and defining I_k as the k-th reference view, T_{i,j,k} as the corresponding translated block i for view j with respect to view k, and w_{i,j,k} as the corresponding weight, we compute the predicted block i of view j, denoted \tilde{I}_j[i], as follows:

$$\tilde{I}_j[i] = \sum_{k=1}^{K} w_{i,j,k} \cdot I_k\!\left[ T_{i,j,k} \right] \qquad (2)$$

The translation parameters of T_{i,j,k} are obtained using phase correlation:

$$\max_{x,y} \; \mathcal{F}^{-1}\!\left[ \frac{\hat{I}_j \circ \hat{I}_k^{*}}{\left| \hat{I}_j \circ \hat{I}_k^{*} \right|} \right](x, y) \qquad (3)$$

where \hat{I}_k and \hat{I}_j are the reference and the translated views in the Fourier domain, \mathcal{F}^{-1} is the inverse Fourier transform, * is the complex conjugate, and ∘ is the Hadamard product. The method works with subpixel precision; however, to limit the overhead, the translations were rounded to integers.
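Below is a compact sketch of the phase correlation of eq. (3), assuming equally sized co-located blocks and NumPy's FFT. The small epsilon in the normalization and the wrap-around handling of the correlation peak are implementation details not spelled out in the text; the function name is illustrative.

```python
import numpy as np


def phase_correlation_shift(block_j, block_k):
    """Integer translation (dy, dx) between a block of the view to
    predict (block_j) and the co-located block of reference view k
    (block_k), found as the peak of eq. (3)."""
    F_j = np.fft.fft2(block_j)
    F_k = np.fft.fft2(block_k)
    cross = F_j * np.conj(F_k)            # Hadamard product with conjugate
    cross /= np.abs(cross) + 1e-12        # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(cross))   # inverse Fourier transform
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    # FFT coordinates wrap around: peaks past the midpoint are negative shifts.
    dy = py - corr.shape[0] if py > corr.shape[0] // 2 else py
    dx = px - corr.shape[1] if px > corr.shape[1] // 2 else px
    return dy, dx  # already rounded to integers, as in the paper
```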
Considering all the translated references for block i as the columns of a matrix X, and defining Y as the ground truth, we then compute the linear combination weights by solving the ridge regression problem, i.e., the minimization of the Mean Square Error (MSE) with a regularization term (squared 2-norm):

$$\min_{W} \; (Y - X \cdot W)^{T} (Y - X \cdot W) + \eta \, (W^{T} \cdot W) \qquad (4)$$

where W is the weight matrix and η is a regularization coefficient. This problem admits an analytical solution:

$$W = \left( X^{T} \cdot X + \eta I \right)^{-1} \cdot X^{T} \cdot Y \qquad (5)$$

where I is the identity matrix. This allows us to find the best coefficients in an efficient and robust way.

The weights are saved as 16-bit floating point numbers. To reduce the overhead in the total bitstream, only a subset of the reference views (neighbors) can be used to estimate each block.
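The closed-form solution of eq. (5) is a few lines in NumPy. The sketch below uses `np.linalg.solve` rather than an explicit matrix inverse for numerical stability, and the float16 cast mirrors the 16-bit storage of the weights; function and parameter names are illustrative, not the paper's code.

```python
import numpy as np


def ridge_weights(X, Y, eta):
    """Solve eq. (5): W = (X^T X + eta I)^{-1} X^T Y.
    X is (n_pixels, K), one flattened translated reference block per
    column; Y is the flattened ground-truth block; eta regularizes."""
    K = X.shape[1]
    W = np.linalg.solve(X.T @ X + eta * np.eye(K), X.T @ Y)
    return W.astype(np.float16)  # weights are stored as 16-bit floats


def predict_block(X, W):
    """Predicted block of eq. (2): linear combination of the translated
    reference blocks with the regressed weights."""
    return X @ W.astype(np.float64)
```

Solving the regularized normal equations instead of forming the inverse in eq. (5) yields the same W while avoiding the conditioning problems of explicit inversion; the ridge term ηI keeps the system well-posed even when translated references are nearly collinear.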

III. VALIDATING EXPERIMENT

Fig. 2: Central perspective view from each content used in the validating experiment: (a) I01 (Bikes); (b) I02 (Danger de Mort); (c) I04 (Stone Pillars Outside); (d) I09 (Fountain & Vincent 2).

... picture divided by the number of pixels per channel (13 × 13 × 434 × 625).

B. Encoding parameters
