Approximate Cross Channel Color Mapping from Sparse Color Correspondences

2013 IEEE International Conference on Computer Vision Workshops

Hasan Sheikh Faridul 1,2, Jurgen Stauder 1, Jonathan Kervec 1, Alain Trémeau 2
1 Technicolor Research & Innovation, France
2 University of Saint Etienne, CNRS, UMR 5516, France
{hasan.sheikh-faridul,jurgen.stauder}@technicolor.com, [email protected]

Abstract

We propose a color mapping method that compensates color differences between images having common semantic content, such as multiple views of a scene taken from different viewpoints. A so-called color mapping model is usually estimated from color correspondences selected from those images. In this work, we introduce a color mapping that models color change in two steps: first, a nonlinear, channel-wise mapping; second, a linear, cross-channel mapping. Additionally, unlike many state-of-the-art methods, we estimate the model from sparse matches and do not require dense geometric correspondences. We show that well-known cross-channel color changes can be estimated from sparse color correspondences. Quantitative and visual benchmark tests show good performance compared to recent methods in the literature.

1. Introduction

Many applications such as stereo imaging, multiple-view stereo, image stitching, photo-realistic texture mapping or color correction in feature film production face the problem of color differences between images showing semantically common content. Possible reasons include uncalibrated cameras, different camera settings, changes of lighting conditions, and differences between imaging workflows. Color mapping is a method that models such color differences between views to allow their compensation.

Color mapping is usually based on matching corresponding features, computing color correspondences from those matched features, and finally estimating a color map from the color correspondences. For instance, Figure 1 shows an example of color mapping. In the literature, finding color correspondences (first step of color mapping) is based either on sparse [2, 4, 6, 13, 20, 21, 29, 31] or on dense [2, 9] feature matching. To model the underlying color change (second step of color mapping), mostly channel-wise [2, 6, 9, 13, 27–31] color mapping models are used. Channel-wise means that color mapping models are estimated independently for each color channel (such as red, green and blue). In a channel-wise model, to estimate a channel of one view, the contribution is taken only from the corresponding channel of the other view, that is:

$$
\begin{aligned}
R_{\text{view2}} &= f_R(R_{\text{view1}}) \\
G_{\text{view2}} &= f_G(G_{\text{view1}}) \qquad\qquad (1) \\
B_{\text{view2}} &= f_B(B_{\text{view1}}).
\end{aligned}
$$

Channel-wise methods can be further categorized into linear [2, 28, 30, 31] or nonlinear [6, 9, 12, 14, 21] models.
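To make the channel-wise assumption of Eq. (1) concrete, the following minimal Python sketch applies three independent per-channel functions to an RGB image. The placeholder curves are purely illustrative and are not the mapping functions estimated in this paper.

```python
# Minimal sketch of the channel-wise model of Eq. (1): each output channel
# depends only on the same channel of the input view. The example curves
# below are arbitrary placeholders, not the estimated mappings.
import numpy as np

def apply_channel_wise(img_view1, f_R, f_G, f_B):
    """Apply independent per-channel mapping functions to an RGB image in [0, 1]."""
    out = np.empty_like(img_view1)
    out[..., 0] = f_R(img_view1[..., 0])   # R_view2 = f_R(R_view1)
    out[..., 1] = f_G(img_view1[..., 1])   # G_view2 = f_G(G_view1)
    out[..., 2] = f_B(img_view1[..., 2])   # B_view2 = f_B(B_view1)
    return np.clip(out, 0.0, 1.0)

# Example usage with arbitrary placeholder curves:
view1 = np.random.rand(64, 64, 3)
view2_pred = apply_channel_wise(view1,
                                f_R=lambda r: r ** 0.9,
                                f_G=lambda g: 0.95 * g,
                                f_B=lambda b: b ** 1.1)
```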
Channel-wise color mapping ignores cross-channel influences, but is simpler and faster to estimate. However, in practice many color changes, such as strong saturation or hue differences due to a change of illumination or of imaging devices, do not follow the channel-wise model assumption. In these situations, color change happens not only in the red, green, and blue channels independently, but also in a cross-color-channel manner. Furthermore, these color changes are often nonlinear. To our knowledge, very few methods [9, 12, 14] have addressed cross-channel and nonlinear color change modeling, and often not in the context of color mapping. In this work, we would like to answer the question whether it is possible and useful to estimate a cross-channel and nonlinear color map using only sparse color correspondences.

Like many methods from the literature, we start from geometric feature correspondences. Then, our approach consists of two steps. In the first step, we robustly fit a nonlinear, channel-wise model. In the second step, we estimate a linear but cross-channel color model. The advantage of the first, channel-wise step is that it captures the nonlinear color change. The advantage of the second, linear but cross-channel step is that it is robust.

In order to compare our method with existing color mapping methods, we use a qualitative (Section 4.1) as well as a quantitative evaluation (Section 4.2). Since our model is neither constrained to channel-wise color changes nor requires dense color correspondences, a wider range of applications is possible.

Figure 1: An example of color mapping. (A) shows the reference view – the colors that we want. (B) shows the test view – the colors that we want to change so that they match the reference. (C) shows the result of our color mapping, which closely matches the reference.

2. Related works

A significant number of color mapping methods that may compensate color differences are available in the literature. These methods can be broadly classified into three main categories: first, geometric correspondence based methods [2, 4–6, 9, 10, 13, 20, 22, 29, 31]; second, statistical distribution transfer based methods [7, 23–28]; third, user-assisted methods [3, 15, 21].

Very few methods have addressed cross-channel color change, especially in the context of color mapping. For instance, computational color constancy algorithms [8] employ a cross-channel linear model, but do not necessarily apply to color mapping, because their objective is to correct for the illumination spectrum so that the image looks as if it were captured under a canonical illuminant (such as CIE D65). Besides, the scene illumination is required to be estimated or known as a prior. Hacohen et al. [9] also use a similar linear model after a channel-wise mapping.

Ilie and Welch [12] address cross-channel change, but from a camera color calibration point of view. The authors estimate the cross-channel model from the 24 sample colors of the GretagMacbeth ColorChecker. Unfortunately, this cannot be extended to color mapping since it requires a ColorChecker in the scene.

Although not necessarily color mapping, Kim et al. [14] propose an imaging model that can correct colors when camera settings are not "correct". A Radial Basis Function (RBF) based cross-channel color model is used in their work.

Despite significant contributions in related works, in the context of color mapping no work provides a method that can model both cross-channel and nonlinear color change between different viewpoints only from sparse color correspondences.

3. A new cross-channel color mapping

Our color mapping method is based on three steps. In the first step, we start by computing feature correspondences with SIFT [16]. We collect colors near these feature correspondences, which provides so-called color correspondences. Here, color correspondences define which colors from one view correspond to which colors of the other view. Note that, to generate the feature correspondences, other methods such as SURF [1], MSER [18], etc. can be utilized as well. In the second step, we robustly fit a channel-wise, constrained, nonlinear color mapping model. In the third step, we estimate a linear, but cross-channel model.
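As an illustration of this first step, the sketch below collects sparse color correspondences from SIFT matches using OpenCV. The homography model for RANSAC and the 3×3 patch-mean color sampling are assumptions made here for concreteness; the paper only states that outliers are removed by RANSAC and that colors are collected near the matched features.

```python
# Sketch: sparse color correspondences from SIFT matches (OpenCV assumed).
# RANSAC on a homography and a 3x3 patch mean are illustrative choices.
import cv2
import numpy as np

def collect_color_correspondences(img1, img2, patch=1, ratio=0.75):
    """Return (C, C_prime): N x 3 arrays of corresponding colors (OpenCV BGR order)."""
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray1, None)
    kp2, des2 = sift.detectAndCompute(gray2, None)

    # Lowe's ratio test on k-nearest-neighbour matches
    matcher = cv2.BFMatcher()
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < ratio * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Geometric outlier removal with RANSAC (homography model assumed here)
    _, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
    inliers = mask.ravel().astype(bool)

    def patch_mean(img, x, y):
        x, y = int(round(x)), int(round(y))
        return img[max(y - patch, 0):y + patch + 1,
                   max(x - patch, 0):x + patch + 1].reshape(-1, 3).mean(axis=0)

    C = np.array([patch_mean(img1, *p.ravel()) for p in pts1[inliers]])
    C_prime = np.array([patch_mean(img2, *p.ravel()) for p in pts2[inliers]])
    return C, C_prime
```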
Let $C_i \leftrightarrow C'_i$ be the color correspondences, with $i$ their index; that is, $C_i$ is a color from one view that corresponds to the color $C'_i$ from the other view. Let us assume three color channels, that is, $C \in \{R, G, B\}$, where $R$, $G$, $B$ stand for the red, green, and blue color correspondences, respectively. Let us also assume that $M$ is the number of color correspondences. Therefore, both $C$ and $C'$ are $M \times 3$ matrices. Each $C_i \leftrightarrow C'_i$ can be read as a 3-tuple correspondence, that is, $(R, G, B) \leftrightarrow (R', G', B')$.

From these color correspondences, the color mapping model $f_C$ is estimated such that it minimizes the following color differences:

$$
\min \sum_{i=1}^{M} \bigl(C'_i - f_C(C_i)\bigr)^2 . \qquad (2)
$$

In this work, we estimate $f_C$ in two steps: first, a channel-wise nonlinear estimation (Section 3.1); second, a cross-channel linear estimation (Section 3.2). Pseudo-code for the whole proposed method is presented in Algorithm 1.

Algorithm 1 Approximate cross-channel color mapping
  Input: feature correspondences $F_i \leftrightarrow F'_i$ by SIFT
  Remove outliers in $F_i \leftrightarrow F'_i$ by RANSAC
  for each $F_i \leftrightarrow F'_i$ do
      Collect color correspondences $C_i \leftrightarrow C'_i$
  end for
  Step 1: Channel-wise estimation
  for each color channel $C$, where $C \in \{R, G, B\}$ do
      Estimate a piece-wise cubic spline (constraint: monotonic) and compute the robust standard deviation of the residuals [19], $\sigma_C$
  end for
  for each $C_i \leftrightarrow C'_i$ do
      if $\mathrm{residual}_i^C > 5\sigma_C$ for any $C$ then
          remove the $i$-th $C_i \leftrightarrow C'_i$
      end if
  end for
  Estimate the model from the "outlier-free" $C_i \leftrightarrow C'_i$
  return model parameters
  Step 2: Cross-channel estimation
  $\hat{\theta} = (\hat{C}^{T}\hat{C})^{-1}\hat{C}^{T}C'$

3.1. Step 1: Channel-wise estimation

In this section, using the color correspondences, we estimate the color mapping model $f_C$, which describes and compensates the color differences in a channel-wise manner.

Before estimation, for regularization purposes, we add two imaginary white and black color correspondences. Since we assume high-quality content and no clipping, for a $P$-bit image we add $(1,1,1) \leftrightarrow (1,1,1)$ and $(2^P, 2^P, 2^P) \leftrightarrow (2^P, 2^P, 2^P)$ to the $C_i \leftrightarrow C'_i$. We do this to account for the very dark and very bright colors that are often missing in high-quality pictures. This also helps the estimation to avoid extrapolation. We estimate the color correction model in the following three main steps.

In the first step, for each color channel, we estimate a piece-wise cubic spline with $K$ knots from the list of color correspondences $C_i \leftrightarrow C'_i$. We set a constraint during the estimation so that the estimated curves are monotonic, which is common in recent works [9, 11]. Similar to [9], we use $K = 6$ so that the model can capture some nonlinear changes.
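As a rough illustration of the two estimation steps described above and in Algorithm 1, the following Python sketch fits a monotone per-channel curve, rejects correspondences whose residual exceeds five times a robust standard deviation, and then solves the cross-channel linear model in closed form. The PCHIP curve through K binned knots is only a monotone stand-in for the paper's constrained piece-wise cubic spline, the MAD-based robust standard deviation is one possible reading of [19], and the black/white regularization correspondences of Section 3.1 are omitted for brevity.

```python
# Sketch of the two estimation steps under stated assumptions (not the
# paper's exact implementation).
import numpy as np
from scipy.interpolate import PchipInterpolator

K = 6  # number of knots per channel, as in the paper

def fit_channel_curve(c, c_prime, k=K):
    """Monotone piece-wise cubic curve c -> c_prime through k binned knots."""
    edges = np.quantile(c, np.linspace(0.0, 1.0, k + 1))
    knots_x, knots_y = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (c >= lo) & (c <= hi)
        if sel.any():
            knots_x.append(c[sel].mean())
            knots_y.append(c_prime[sel].mean())
    knots_x = np.asarray(knots_x)
    knots_y = np.maximum.accumulate(np.asarray(knots_y))  # enforce monotonicity
    knots_x, idx = np.unique(knots_x, return_index=True)   # PCHIP needs strictly increasing x
    return PchipInterpolator(knots_x, knots_y[idx], extrapolate=True)

def estimate_color_mapping(C, C_prime):
    """C, C_prime: M x 3 arrays of corresponding colors (one row per match)."""
    # Step 1: channel-wise curves and 5-sigma outlier rejection
    curves = [fit_channel_curve(C[:, ch], C_prime[:, ch]) for ch in range(3)]
    residuals = np.column_stack([np.abs(C_prime[:, ch] - curves[ch](C[:, ch]))
                                 for ch in range(3)])
    # Robust standard deviation via median absolute deviation (assumed choice)
    sigma = 1.4826 * np.median(np.abs(residuals - np.median(residuals, axis=0)),
                               axis=0)
    keep = (residuals <= 5.0 * sigma).all(axis=1)
    curves = [fit_channel_curve(C[keep, ch], C_prime[keep, ch]) for ch in range(3)]

    # Step 2: cross-channel linear model on the channel-wise mapped colors,
    # theta_hat = (C_hat^T C_hat)^{-1} C_hat^T C'
    C_hat = np.column_stack([curves[ch](C[keep, ch]) for ch in range(3)])
    theta, *_ = np.linalg.lstsq(C_hat, C_prime[keep], rcond=None)
    return curves, theta
```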
