(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2016/0323563 A1
SHEIKH FARIDUL et al.                  (43) Pub. Date: Nov. 3, 2016

(54) METHOD FOR COMPENSATING FOR COLOR DIFFERENCES BETWEEN DIFFERENT IMAGES OF A SAME SCENE

(71) Applicant: THOMSON LICENSING, Issy-les-Moulineaux (FR)

(72) Inventors: Hasan SHEIKH FARIDUL, Cesson-Sevigne (FR); Jurgen STAUDER, Montreuil/Ille (FR); Catherine SERRE, Saint Gregoire (FR); Alain TREMEAU, Saint Etienne (FR)

(21) Appl. No.: 15/103,846

(22) PCT Filed: Dec. 8, 2014

(86) PCT No.: PCT/EP2014/076890
     § 371 (c)(1), (2) Date: Jun. 10, 2016

(30) Foreign Application Priority Data
     Dec. 10, 2013 (EP) ................ 13306693.6
     Sep. 24, 2014 (EP) ................ 14306471.5

Publication Classification

(51) Int. Cl.
     H04N 13/02 (2006.01)
     H04N 1/60 (2006.01)

(52) U.S. Cl.
     CPC: H04N 13/0257 (2013.01); H04N 1/6011 (2013.01); H04N 1/6052 (2013.01); H04N 1/6086 (2013.01); H04N 13/0239 (2013.01)

(57) ABSTRACT

The method comprises the steps of:
- for each combination of a first and a second illuminant, applying its corresponding chromatic adaptation matrix to the colors of a first image to compensate, such as to obtain chromatic adapted colors forming a chromatic adapted image, and calculating the difference between the colors of a second image and the chromatic adapted colors of this chromatic adapted image;
- retaining the combination of first and second illuminants for which the corresponding calculated difference is the smallest;
- compensating said color differences by applying the chromatic adaptation matrix corresponding to said retained combination to the colors of said first image.

[Fig. 1 (Sheet 1 of 2)]
[Fig. 2 (Sheet 2 of 2): modules MOD 1, MOD 2, MOD 3]

METHOD FOR COMPENSATING FOR COLOR DIFFERENCES BETWEEN DIFFERENT IMAGES OF A SAME SCENE

TECHNICAL FIELD

[0001] The invention concerns a method and a system for robust color mapping that explicitly takes care of a change of illuminants by chromatic adaptation based illuminant mapping.

BACKGROUND ART

[0002] Many applications such as stereo imaging, multiple-view stereo, image stitching, photorealistic texture mapping or color correction in feature film production face the problem of color differences between images showing semantically common content. Possible reasons include: uncalibrated cameras, different camera settings, changes of lighting conditions, and differences between different film production workflows. Color mapping is a method that models such color differences between different views of a same scene to allow the compensation of their color differences.

[0003] Color mapping may notably be based on: geometrical matching of corresponding features between the different views, computing color correspondences between the colors of those different views from those matched features, and finally calculating a color mapping function from these computed color correspondences.

[0004] Color mapping is then able to compensate color differences between images or views. These images or views of a particular scene can be taken from a same viewpoint or from different viewpoints, under the same or different illumination conditions. Moreover, different imaging devices (smartphone vs. professional camera) with different device settings can also be used to capture these images or views.
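Paragraphs [0003] and [0004] describe a generic color-mapping pipeline: match features across views, collect color correspondences from the matched positions, then fit a mapping function that compensates the color differences. The following Python sketch illustrates one possible instantiation of that pipeline; the helper names, the format assumed for the matches, and the per-channel least-squares polynomial fit are illustrative assumptions, not the method claimed in this application.

```python
import numpy as np

def collect_color_correspondences(img1, img2, matches):
    """Build two (N, 3) arrays of corresponding colors from matched pixel
    positions (x1, y1, x2, y2) between two views of the same scene."""
    src = np.array([img1[y1, x1] for (x1, y1, x2, y2) in matches], dtype=float)
    dst = np.array([img2[y2, x2] for (x1, y1, x2, y2) in matches], dtype=float)
    return src, dst

def fit_color_mapping(src, dst, degree=2):
    """Fit one polynomial per color channel by least squares,
    dst_c ~ sum_k a_k * src_c**k; returns a list of coefficient vectors."""
    return [np.polyfit(src[:, c], dst[:, c], degree) for c in range(3)]

def apply_color_mapping(img, coeffs):
    """Apply the fitted per-channel polynomials to a whole image."""
    out = np.empty_like(img, dtype=float)
    for c in range(3):
        out[..., c] = np.polyval(coeffs[c], img[..., c].astype(float))
    return np.clip(out, 0, 255)

# Usage (the matches would come from any geometric feature matcher,
# e.g. sparse keypoint matching; they are assumed given here):
# src, dst = collect_color_correspondences(img1, img2, matches)
# coeffs = fit_color_mapping(src, dst)
# img1_mapped = apply_color_mapping(img1, coeffs)
```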
[0005] Both dense and sparse geometric feature matching methods are reported in the literature to be used to calculate color correspondences, such that each color correspondence comprises two corresponding colors, one from one view and another from another view of the same scene, and such that the corresponding colors of a color correspondence generally belong to the same semantic element of the scene, for instance the same object or the same part of this object. Geometric feature matching algorithms usually match either isolated features (then related to "feature matching") or image regions from one view with features or image regions of another view. Features are generally small semantic elements of the scene, and feature matching aims to find the same element in different views. An image region generally represents a larger, semantic part of a scene. Color correspondences are usually derived from these matched features or matched regions. It is assumed that the color correspondences collected from matched features and regions generally represent all colors of the views of the scene.

[0006] In the specific framework of stereo imaging, 3D video content is usually created, processed and reproduced on a 3D capable screen or stereoscopic display device. Processing of 3D video content generally allows enhancing 3D information (for example disparity estimation) or enhancing 2D images using 3D information (for example view interpolation). Generally, 3D video content is created from two (or more) 2D videos captured under different viewpoints. By relating these two (or more) 2D views of the same scene in a geometrical manner, 3D information about the scene can be extracted.

[0007] Between different views or images of a same scene, geometrical differences but also color differences occur. For example, a scene can be acquired under two different illumination conditions, illum1 and illum2, and two different viewpoints, viewpoint1 and viewpoint2. Under viewpoint1 and the illuminant illum1, a first image Img1 is captured. Next, under viewpoint2 and the same illuminant illum1, a second image Img2 is captured. We assume that the camera and the settings of the second acquisition are identical to the camera and the settings of the first acquisition. As Img1 and Img2 are taken under the same illumination condition, illum1, and as they represent the same scene, their colors are generally consistent, at least for non-occluded scene parts and assuming Lambertian reflection, even if the two viewpoints are different. That means that the different features of the scene should have the same color in both images, Img1 and Img2, although there may be geometric differences. Then, a third image Img3 is acquired under the same viewpoint as for the second image, viewpoint2, but under another illuminant, illum2. As Img1 and Img3 are taken under different illumination conditions, illum1 vs. illum2, the colors of at least some features of the scene are different in Img1 and in Img3, and there may also be geometric differences.

[0008] In general, the human eye chromatically adapts to a scene and to its illuminant, this phenomenon being known as "chromatic adaptation". Chromatic adaptation is the ability of the human visual system to discount the colour of the illumination so as to approximately preserve the appearance of an object in a scene. It can be explained as an independent sensitivity regulation of the three cone responses of the human eye. This chromatic adaptation means that, when looking at a scene illuminated by a first illuminant, the human visual system adapts itself to this first illuminant, and that, when looking at the same scene illuminated under a second illuminant different from the first one, the human visual system adapts itself to this second illuminant. According to this known chromatic adaptation principle of the human eye, in between these two chromatic adaptation states, the human eye perceives different colors when looking at a same scene.
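Paragraph [0008] describes chromatic adaptation as an independent sensitivity regulation of the three cone responses. The standard formalization of this idea is the von Kries model, reproduced below for reference; the notation for the source and destination illuminant white points in LMS is ours, not the application's.

```latex
% Von Kries chromatic adaptation: each cone response is rescaled
% independently by the ratio of the destination and source illuminant
% white points, expressed in the LMS cone space.
\[
\begin{pmatrix} L' \\ M' \\ S' \end{pmatrix}
=
\begin{pmatrix}
L_{w,d}/L_{w,s} & 0 & 0 \\
0 & M_{w,d}/M_{w,s} & 0 \\
0 & 0 & S_{w,d}/S_{w,s}
\end{pmatrix}
\begin{pmatrix} L \\ M \\ S \end{pmatrix}
\]
```

Here (L, M, S) is a color seen under the source illuminant, (L', M', S') its adapted counterpart under the destination illuminant, and the subscripts w,s and w,d denote the white points of the source and destination illuminants.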
[0009] It is common to use the LMS color space when performing a chromatic adaptation of the color of an object of a scene, as perceived by the human eye under a first illuminant, to the color of the same object as perceived by the human eye under a second illuminant different from the first one, i.e. when estimating the appearance of a color sample for the human eye under a different illuminant. The LMS color space is generally used for such a chromatic adaptation. LMS is a color space in which the responses of the three types of cones of the human eye are represented, named after their responsivity (sensitivity) at long (L), medium (M) and short (S) wavelengths.

[0010] More precisely, for the chromatic adaptation of a color, the XYZ tristimulus values representing this color in the XYZ color space as perceived under a first illuminant (by a standard CIE observer) are converted to LMS tristimulus values representing the same color in the well-known "spectrally sharpened" CAT02 LMS space, to prepare for color adaptation. "CAT" means "Color Adaptation Transform". "Spectral sharpening" is the transformation of the tristimulus values of a color into new values that would have resulted from a sharper, more concentrated set of spectral sensitivities, for example of three basic color sensors of the human eye. Such a spectral sharpening is known for aiding color constancy, especially in the blue region. Applying such a spectral sharpening means that the tristimulus values of a color are generated in this CAT02 LMS color space from spectral sensitivities of eye sensors that spectrally overlap as little as possible, preferably that do not overlap at all, such as to get the smallest correlation between the three tristimulus values of this color.

[...] scene, a first step of the method according to the invention would be to associate the first image with a first illuminant, assuming that this first image shows a scene under this first illuminant, and the second image of the same scene with a second illuminant, assuming that this second image shows the same scene under this second illuminant.

[0018] A second step of the method according to the invention would be to compensate the color differences between these two different images of a same scene in the way the human visual system would compensate them when looking at this scene under different illuminants.
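Paragraphs [0009]-[0010] and the abstract describe converting colors into the spectrally sharpened CAT02 LMS space, adapting them between a candidate pair of illuminants, and retaining the illuminant pair whose adaptation gives the smallest difference to the second image. The Python sketch below illustrates these steps under stated assumptions: the CAT02 matrix and the von Kries diagonal scaling are the standard published ones, but the candidate illuminant list, the use of corresponding XYZ colors as input, the mean Euclidean color-difference measure and the helper names are illustrative choices, not the claimed implementation.

```python
import numpy as np
from itertools import product

# Standard CAT02 matrix mapping CIE XYZ to the spectrally sharpened LMS space.
M_CAT02 = np.array([[ 0.7328,  0.4296, -0.1624],
                    [-0.7036,  1.6975,  0.0061],
                    [ 0.0030,  0.0136,  0.9834]])

def chromatic_adaptation_matrix(white_src_xyz, white_dst_xyz):
    """3x3 matrix adapting XYZ colors from a source to a destination
    illuminant: XYZ -> CAT02 LMS -> von Kries diagonal scaling -> XYZ."""
    lms_src = M_CAT02 @ white_src_xyz
    lms_dst = M_CAT02 @ white_dst_xyz
    gain = np.diag(lms_dst / lms_src)        # independent per-cone scaling
    return np.linalg.inv(M_CAT02) @ gain @ M_CAT02

def select_illuminant_pair(colors_img1, colors_img2, candidate_whites):
    """For every ordered pair of candidate illuminant white points, adapt the
    (N, 3) XYZ colors of the first image, measure the mean difference to the
    corresponding colors of the second image, and keep the pair giving the
    smallest difference together with its adaptation matrix."""
    best = None
    for w1, w2 in product(candidate_whites, repeat=2):
        cam = chromatic_adaptation_matrix(np.asarray(w1, float),
                                          np.asarray(w2, float))
        adapted = colors_img1 @ cam.T        # chromatic adapted colors
        diff = np.mean(np.linalg.norm(adapted - colors_img2, axis=1))
        if best is None or diff < best[0]:
            best = (diff, (w1, w2), cam)
    return best  # (smallest difference, retained illuminant pair, matrix)

# The retained matrix can then be applied to all colors of the first image
# to compensate the color differences, as recited in the abstract.
```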