(19) United States
(12) Patent Application Publication — SHEIKH FARIDUL et al.
(10) Pub. No.: US 2016/0323563 A1
(43) Pub. Date: Nov. 3, 2016

(54) METHOD FOR COMPENSATING FOR COLOR DIFFERENCES BETWEEN DIFFERENT IMAGES OF A SAME SCENE

(71) Applicant: THOMSON LICENSING, Issy-les-Moulineaux (FR)

(72) Inventors: Hasan SHEIKH FARIDUL, Cesson-Sevigne (FR); Jurgen STAUDER, Montreuil/Ille (FR); Catherine SERRE, Saint Gregoire (FR); Alain TREMEAU, Saint Etienne (FR)

(21) Appl. No.: 15/103,846

(22) PCT Filed: Dec. 8, 2014

(86) PCT No.: PCT/EP2014/076890
     § 371 (c)(1), (2) Date: Jun. 10, 2016

(30) Foreign Application Priority Data
     Dec. 10, 2013 (EP) ................ 13306693.6
     Sep. 24, 2014 (EP) ................ 14306471.5

Publication Classification

(51) Int. Cl.
     H04N 13/02 (2006.01)
     H04N 1/60 (2006.01)

(52) U.S. Cl.
     CPC ........ H04N 13/0257 (2013.01); H04N 1/6011 (2013.01); H04N 1/6052 (2013.01); H04N 1/6086 (2013.01); H04N 13/0239 (2013.01)

(57) ABSTRACT

The method comprises the steps of: for each combination of a first and second illuminants, applying its corresponding chromatic adaptation matrix to the colors of a first image such as to obtain chromatic adapted colors forming a chromatic adapted image and calculating the difference between the colors of a second image and the chromatic adapted colors of this chromatic adapted image; retaining the combination of first and second illuminants for which the corresponding calculated difference is the smallest; compensating said color differences by applying the chromatic adaptation matrix corresponding to said retained combination to the colors of said first image.


[FIG. 1: flowchart of the main embodiment of the method, iterating over the illuminant combinations ("Increment i") and testing the corresponding chromatic adaptation matrices]

[FIG. 2: device implementing the method, comprising the modules MOD 1, MOD 2 and MOD 3]

METHOD FOR COMPENSATING FOR COLOR DIFFERENCES BETWEEN DIFFERENT IMAGES OF A SAME SCENE

TECHNICAL FIELD

[0001] The invention concerns a method and a system for robust color mapping that explicitly takes care of change of illuminants by chromatic adaptation based illuminant mapping.

BACKGROUND ART

[0002] Many applications such as stereo imaging, multiple-view stereo, image stitching, photorealistic texture mapping or color correction in feature film production, face the problem of color differences between images showing semantically common content. Possible reasons include: uncalibrated cameras, different camera settings, change of lighting conditions, and differences between different film production workflows. Color mapping is a method that models such color differences between different views of a same scene to allow the compensation of their color differences.

[0003] Color mapping may be notably based on: geometrical matching of corresponding features between the different views, computing color correspondences between the colors of those different views from those matched features and finally calculating a color mapping function from these computed color correspondences.

[0004] Color mapping is then able to compensate color differences between images or views. These images or views of a particular scene can be taken from a same viewpoint or from different viewpoints, under a same or different illumination conditions. Moreover, different imaging devices (smartphone vs. professional camera) with different device settings can also be used to capture these images or views.

[0005] Both dense and sparse geometric feature matching methods are reported in the literature to be used to calculate color correspondences, such that each color correspondence comprises two corresponding colors, one from one view, and another from another view of the same scene, and such that corresponding colors of a color correspondence belong generally to the same semantic element of the scene, for instance the same object or the same part of this object. Geometric feature matching algorithms usually match either isolated features (then, related to "feature matching") or image regions from one view with features or image regions of another view. Features are generally small semantic elements of the scene and feature matching aims to find the same element in different views. An image region represents generally a larger, semantic part of a scene. Color correspondences are usually derived from these matched features or matched regions. It is assumed that color correspondences collected from matched features and regions represent generally all colors of the views of the scene.

[0006] In the specific framework of stereo imaging, 3D video content is usually created, processed and reproduced on a 3D capable screen or stereoscopic display device. Processing of 3D video content allows generally to enhance 3D information (for example disparity estimation) or to enhance 2D images using 3D information (for example view interpolation). Generally, 3D video content is created from two (or more) 2D videos captured under different viewpoints. By relating these two (or more) 2D views of the same scene in a geometrical manner, 3D information about the scene can be extracted.

[0007] Between different views or images of a same scene, geometrical difference but also color difference occurs. For example, a scene can be acquired under two different illumination conditions, illum1 and illum2, and two different viewpoints, viewpoint1 and viewpoint2. Under the viewpoint1 and the illuminant illum1, a first image Img1 is captured. Next, under the viewpoint2 and the same illuminant illum1, a second image Img2 is captured. We assume that the camera and the settings of the second acquisition are identical to the camera and the settings of the first acquisition. As Img1 and Img2 are taken under the same illumination condition, illum1, and as they represent the same scene, their colors are generally consistent, at least for non-occluded scene parts and assuming Lambertian reflection, even if the two viewpoints are different. That means that the different features of the scene should have the same color in both images Img1 and Img2, although there may be geometric differences. Then, a third image Img3 is acquired under the same viewpoint as for the second image, viewpoint2, but under another illuminant illum2. As Img1 and Img3 are taken under different illumination conditions, illum1 vs. illum2, the colors of at least some features of the scene are different in Img1 and in Img3, and also there may be geometric differences.

[0008] In general, the human eye chromatically adapts to a scene and to its illuminant, this phenomenon being known as "chromatic adaptation". Chromatic adaptation is the ability of the human visual system to discount the colour of the illumination to approximately preserve the appearance of an object in a scene. It can be explained as independent sensitivity regulation of the three cone responses of the human eye. This chromatic adaptation means that, when looking to a scene illuminated by a first illuminant, the human visual system adapts itself to this first illuminant, and that, when looking to the same scene illuminated under a second illuminant different from the first one, the human visual system adapts itself to this second illuminant. According to this known chromatic adaptation principle of the human eye, in between these two chromatic adaptation states, the human eye perceives different colors when looking to a same scene.

[0009] It is common to use the LMS color space when performing a chromatic adaptation of the color of an object of a scene as perceived by the human eye under a first illuminant to the color of the same sample object as perceived by the human eye under a second illuminant different from the first one, i.e. estimating the appearance of a color sample for the human eye under a different illuminant. The LMS color space is generally used for such a chromatic adaptation. LMS is a color space in which the responses of the three types of cones of the human eye are represented, named after their responsivity (sensitivity) at long (L), medium (M) and short (S) wavelengths.

[0010] More precisely, for the chromatic adaptation of a color, the XYZ tristimulus values representing this color in the XYZ color space as perceived under a first illuminant (by a standard CIE observer) are converted to LMS tristimulus values representing the same color in the well-known "spectrally sharpened" CAT02 LMS space to prepare for color adaptation. "CAT" means "Color Adaptation Transform". "Spectral sharpening" is the transformation of the tristimulus values of a color into new values that would have resulted from a sharper, more concentrated set of spectral sensitivities, for example of three basic color sensors of the human eye. Such a spectral sharpening is known for aiding color constancy, especially in the blue region. Applying such a spectral sharpening means that the tristimulus values of a color are generated in this CAT02 LMS color space from spectral sensitivities of eye sensors that spectrally overlap as less as possible, preferably that do not overlap at all, such as to get the smallest correlation between the three tristimulus values of this color.
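As an aside on paragraph [0010], the conversion between XYZ tristimulus values and spectrally sharpened CAT02 LMS values can be sketched as follows. This is an illustrative Python/NumPy snippet, not part of the patent text; the matrix values are the standard CAT02 matrix published with CIECAM02, and the D65 white point used in the example is an assumption.

    import numpy as np

    # Standard CAT02 matrix of CIECAM02 (XYZ -> spectrally sharpened LMS).
    M_CAT02 = np.array([
        [ 0.7328,  0.4296, -0.1624],
        [-0.7036,  1.6975,  0.0061],
        [ 0.0030,  0.0136,  0.9834],
    ])
    M_CAT02_INV = np.linalg.inv(M_CAT02)

    def xyz_to_lms(xyz):
        """Convert XYZ tristimulus values (array of shape (..., 3)) to CAT02 LMS."""
        return np.asarray(xyz, dtype=float) @ M_CAT02.T

    def lms_to_xyz(lms):
        """Inverse conversion, from CAT02 LMS back to XYZ."""
        return np.asarray(lms, dtype=float) @ M_CAT02_INV.T

    # Example: the D65 white point expressed in XYZ, converted to LMS and back.
    xyz_d65 = np.array([95.047, 100.0, 108.883])
    print(xyz_to_lms(xyz_d65), lms_to_xyz(xyz_to_lms(xyz_d65)))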

[0011] Then, in this CAT02 LMS space, the chromatic adaptation of colors can be performed using a chromatic adaptation matrix which is precalculated to adapt, into this color space, the color of a sample object as perceived under a first illuminant into a color of the same sample object as perceived under a second illuminant. A chromatic adaptation matrix is then specific to a pair of illuminants. To calculate such matrices, the CMCCAT1997 or CMCCAT2000 can be used. When using the color appearance model CMCCAT1997, the so-called "Bradford transformation matrix" is generally used.

[0012] Having then obtained the LMS tristimulus values representing the color of the sample object as perceived under the second illuminant, the corresponding XYZ tristimulus values representing this color in the XYZ color space can be obtained by using the inverse of the color transformation above.

[0013] Besides changing illumination conditions of a scene, there are other reasons for color differences such as change in shutter speed, change in white balancing of the camera causing change of white temperature, change of illumination intensity, change of illumination spectrum, etc.

[0014] In the patent application US2003/164828 (KONICA), a method is proposed to transform colors in a photograph acquired under a first illuminant into colors of a photograph such as acquired under a second illuminant, by using measurement of color chips. This method might compensate the color differences between two images if the two images show the same scene under different illuminants. The method relies on the presence of objects with known colors in the scene ("color chips"). The U.S. Pat. No. 7,362,357 proposes a related method of estimating the illuminant of the scene relying on the presence of objects with known color in the scene ("color standards").

[0015] The U.S. Pat. No. 7,068,840 B2 (KODAK) allows calculating the illuminant of a scene from an image of this scene. In the disclosed method, the image is segmented into regions with homogeneous color, those regions are then modeled using the so-called dichromatic reflection model, and the illuminant of this scene is found by convergence of lines of the reflection model of the regions. This method relies on the presence of regions with homogeneous color.

[0016] In the U.S. Pat. No. 7,688,468 (CANON), is disclosed a method of compensating the color differences between initial color data and final color data that have been observed under an initial and a final illuminant, respectively. For color compensation, the principle of chromatic adaptation transform is applied. But the method relies on the knowledge of initial and final illuminants.

SUMMARY OF INVENTION

[0017] For the compensation of color differences between a first image of a scene and a second image of the same scene, a first step of the method according to the invention would be to associate the first image to a first illuminant—assuming that this first image shows a scene under this first illuminant—and the second image of the same scene to a second illuminant—assuming that this second image shows the same scene under this second illuminant.

[0018] A second step of the method according to the invention would be to compensate the color differences between these two different images of a same scene in a way how the human visual system would compensate when looking at this scene with different illuminants. This compensation step by its own is known to be a chromatic adaptation transform (CAT).

[0019] A third step more specific to the method according to the invention is to determine the first and second illuminants associated respectively to the first and second image of the same scene by a search within a fixed set of Q possible illuminants for this scene. A number of

\binom{Q}{2} = \frac{Q!}{2!\,(Q-2)!}

combinations of two illuminants is tested and the best combination of illuminants having the smallest compensation error is retained as first and second illuminants respectively for the first image and for the second image of the same scene. According to the invention, the chromatic adaptation transform (CAT) that is specifically adapted for the color compensation between the two illuminants of this best combination is used as color mapping operator to compensate the color differences between the first and the second images.
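To make the search of paragraph [0019] concrete, the candidate illuminant pairs can be enumerated as in the following sketch. This is an illustrative Python example under the assumption of a small, hypothetical illuminant list; the names are placeholders and not taken from the patent.

    from itertools import combinations
    from math import comb

    # Hypothetical fixed set of Q candidate illuminants (placeholder names).
    illuminants = ["A", "D50", "D65", "D75", "F2", "F7", "F11"]
    Q = len(illuminants)

    # Number of combinations of two different illuminants out of Q: Q!/(2!(Q-2)!).
    print(comb(Q, 2))  # 21 for Q = 7

    # Each candidate pair associates a first illuminant with the first image
    # and a second illuminant with the second image; the pair whose chromatic
    # adaptation transform yields the smallest compensation error is retained.
    for first_illum, second_illum in combinations(illuminants, 2):
        pass  # apply the corresponding chromatic adaptation matrix and score it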
[0020] More precisely, the subject of the invention is a method for compensating color differences between a first image of a scene and a second image of the same scene, the colors of each image being represented by tristimulus values in a LMS color space, a set of

\binom{Q}{2} = \frac{Q!}{2!\,(Q-2)!}

possible combinations of two different illuminants out of Q given illuminants being defined, for each combination of a first and second illuminants, a chromatic adaptation matrix being calculated in order to compensate, in said LMS color space, the color of any sample object of said scene as perceived under said first illuminant into a color of the same sample object as perceived under said second illuminant, said method comprising the steps of:

[0021] for each combination of a first and second illuminants, applying its corresponding chromatic adaptation matrix to the colors of said first image such as to obtain chromatic adapted colors forming a chromatic adapted image and calculating the difference between the colors of the second image and the chromatic adapted colors of this chromatic adapted image,

[0022] retaining the combination of first and second illuminants for which the corresponding calculated difference is the smallest,

[0023] compensating said color differences by applying the chromatic adaptation matrix corresponding to said retained combination to the colors of said first image.

[0024] When the colors of the first image and second image are provided in other color spaces, as in a RGB color space or XYZ color space, they are converted in a manner known per se in tristimulus values expressed in the LMS color space, before being color compensated according to the method of the invention. Similarly, after such color compensation, they are converted back from the LMS color space into the other original color space. Such conversion may require known spectral sharpening means such as the Bradford spectral sharpening transform (see above).

[0025] Preferably, the LMS color space is the CAT02 LMS space. CAT02 LMS space is a "spectrally sharpened" LMS color space. Any LMS color space that is spectrally sharpened can be used alternatively, preferably those generating tristimulus values of colors from spectral densities that overlap as less as possible such as to get small or even null correlation between these tristimulus values.

[0026] Preferably, the first and second images have a semantically common content. The content can be considered as semantically common for instance if both images show same objects, even under different points of view or at different times between which some common objects may have moved.

[0027] The subject of the invention is also a method for compensating color differences between a first image of a scene and a second image of the same scene, wherein a set of

\binom{Q}{2} = \frac{Q!}{2!\,(Q-2)!}

combinations of two different illuminants out of Q given illuminants is defined, wherein, for each combination of a first and second illuminants of said set, a chromatic adaptation transform is given such that, when applied to the color of an object of said scene as perceived under said first illuminant, this color is transformed into a chromatic adapted color being the color of the same object but as perceived under said second illuminant, said method comprising:

[0028] applying each of said chromatic adaptation transforms to the colors of said first image such as to obtain chromatic adapted colors forming a corresponding chromatic adapted first image and calculating a corresponding global color difference between the colors of the second image and the chromatic adapted colors of this chromatic adapted first image,

[0029] retaining the combination of first and second illuminants for which the corresponding calculated global color difference is the smallest,

[0030] compensating said color differences by applying the chromatic adaptation transform corresponding to said retained combination to the colors of said first image, resulting into a color compensated first image.
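The loop described in paragraphs [0027] to [0030] can be summarized by the following sketch (illustrative Python/NumPy, not the patent's own code). It assumes that the colors of both images are already expressed in an LMS color space and that the chromatic adaptation matrices of all combinations have been precomputed; for brevity the global color difference is taken as a plain quadratic sum in LMS, whereas the preferred variant described below uses CIELAB distances over color correspondences.

    import numpy as np

    def compensate_colors(first_lms, second_lms, cat_matrices):
        """first_lms, second_lms: (N, 3) arrays of LMS colors of corresponding
        points in the first and second image; cat_matrices: list of 3x3
        chromatic adaptation matrices, one per illuminant combination."""
        best_index, best_diff = 0, np.inf
        for i, cat in enumerate(cat_matrices):
            adapted = first_lms @ cat.T                         # chromatic adapted first image
            diff = float(np.sum((second_lms - adapted) ** 2))   # global color difference
            if diff < best_diff:
                best_index, best_diff = i, diff
        # Compensate the first image with the retained combination.
        return first_lms @ cat_matrices[best_index].T, best_index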
[0031] Preferably, the colors of each image are represented by tristimulus values in a color space and each chromatic adaptation transform related to a combination is a chromatic adaptation matrix such that, when applied to the tristimulus values representing, into said color space, the color of an object of said scene as perceived under the first illuminant of said combination, these tristimulus values are transformed into tristimulus values representing the color of the same object but as perceived under the second illuminant of said combination. Preferably, said color space (LMS) is the CAT02 LMS space.

[0032] Preferably, color correspondences between the first image and the second image are determined and said global color difference between the colors of the second image and the chromatic adapted colors of the chromatic adapted first image is calculated as a quadratic sum of the color distances between colors that correspond one to another in the first and the second image, wherein said sum is calculated over all color correspondences over the two images.

[0033] Such distances are preferably computed in CIELAB color space.

[0034] A subject of the invention is also a device for compensating color differences between a first image of a scene and a second image of the same scene, wherein a set of

\binom{Q}{2} = \frac{Q!}{2!\,(Q-2)!}

possible combinations of two different illuminants out of Q given illuminants is defined, wherein, for each combination of a first and second illuminants of said set, a chromatic adaptation transform is given such that, when applied to the color of an object of said scene as perceived under said first illuminant, this color is transformed into a chromatic adapted color being the color of the same object but as perceived under said second illuminant, said device comprising:

[0035] a first module configured for applying each of said chromatic adaptation transforms to the colors of said first image such as to obtain chromatic adapted colors forming a corresponding chromatic adapted first image and configured for calculating a corresponding global color difference between the colors of the second image and the chromatic adapted colors of this chromatic adapted first image,

[0036] a second module configured for retaining, among said combinations of said set, the combination of first and second illuminants for which the corresponding calculated global color difference is the smallest, and

[0037] a third module configured for compensating said color differences by applying the chromatic adaptation transform corresponding to said retained combination to the colors of said first image, resulting into a color compensated first image.

BRIEF DESCRIPTION OF DRAWINGS

[0038] The invention will be more clearly understood on reading the description which follows, given by way of non-limiting example, and with reference to the appended figures in which:

[0039] FIG. 1 is a flowchart illustrating a main embodiment of the method according to the invention;

[0040] FIG. 2 illustrates a device adapted to implement the main embodiment of FIG. 1.

DESCRIPTION OF EMBODIMENTS

[0041] According to a general embodiment illustrated on FIG. 1, the color compensating method of the invention compensates color differences between a first image Im_1 and a second image Im_2.

[0042] If the colors of these both images are represented by device-dependent color coordinates, these device-dependent color coordinates of both images are transformed in a manner known per se into device-independent color coordinates in the XYZ color space using for instance given color characterization profiles, the colors of the first image then being represented by first XYZ coordinates and the colors of the second image being represented by second XYZ coordinates.

[0043] Then, the compensation from the first to the second XYZ color coordinates is done according to a non-limiting embodiment of the invention using the following steps:

[0044] 1. Transforming the first XYZ color coordinates of colors of the first image Im_1 into first LMS color coordinates using a given spectral sharpening matrix such that the first LMS color coordinates of these colors can be assumed to correspond to narrower spectral fractions such as to be less correlated than the first XYZ coordinates of these colors;

[0045] 2. Similarly, transforming the second XYZ color coordinates of colors of the second image Im_2 into second LMS color coordinates using a given spectral sharpening matrix such that the second LMS color coordinates of these colors can be assumed to correspond to narrower spectral fractions such as to be less correlated than the second XYZ coordinates of these colors;

[0046] 3. Building a set of

K = \binom{Q}{2} = \frac{Q!}{2!\,(Q-2)!}

possible combinations C1, C2, . . . , Ci, . . . , CK of two different illuminants out of Q given illuminants, with i = 1 . . . K;

[0047]-[0049] 4.-6. [. . .] the second LMS color coordinates, such that this difference represents a global color distance between the chromatic-adapted image Im_1Ai and the second image Im_2;

[0050] 7. Retaining the best chromatic adapted mapping operator CAMi generating the smallest difference.

[0051] A preferred embodiment for the calculation of a global color distance between the chromatic adapted image Im_1Ai and the second image Im_2 will now be described.

[0052] Color correspondences being determined in a manner known per se between the first image Im_1 and the second image Im_2, the global color distance between the two images is preferably computed as a quadratic sum of the color distances between colors that correspond one to another in the chromatic-adapted first image Im_1Ai and in the second image Im_2:

\text{global color distance} = \sum \left( Lab - CAT_i \cdot L'a'b' \right)^2

[0053] wherein the sum is calculated over all color correspondences over the two images.

[0054] Such distances are preferably computed in the CIELAB color space. In the equation above, Lab are the CIELAB coordinates of a color in the second image Im_2 and L'a'b' are the CIELAB color coordinates of a corresponding color in the first image Im_1A.
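The quadratic sum of paragraphs [0052] to [0054] can be written, for example, as below. This is an illustrative Python/NumPy sketch, not part of the patent: it assumes that the chromatic-adapted first image and the second image are given in XYZ at the matched positions, and it uses the D65 white point for the CIELAB conversion, which is an arbitrary choice made for the example.

    import numpy as np

    WHITE_XYZ = np.array([95.047, 100.0, 108.883])  # assumed reference white (D65)

    def xyz_to_lab(xyz, white=WHITE_XYZ):
        """Standard CIE 1976 L*a*b* conversion of XYZ values of shape (..., 3)."""
        t = np.asarray(xyz, dtype=float) / white
        delta = 6.0 / 29.0
        f = np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4.0 / 29.0)
        L = 116.0 * f[..., 1] - 16.0
        a = 500.0 * (f[..., 0] - f[..., 1])
        b = 200.0 * (f[..., 1] - f[..., 2])
        return np.stack([L, a, b], axis=-1)

    def global_color_distance(adapted_first_xyz, second_xyz):
        """Quadratic sum of CIELAB distances over all color correspondences.
        Both arguments are (N, 3) XYZ arrays of corresponding colors, the first
        taken from the chromatic-adapted first image Im_1Ai, the second from
        the second image Im_2."""
        lab_adapted = xyz_to_lab(adapted_first_xyz)
        lab_second = xyz_to_lab(second_xyz)
        return float(np.sum((lab_second - lab_adapted) ** 2))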
[0055] In a preferred variation of this embodiment, the given spectral sharpening uses the Bradford spectral sharpening transform.

[0056] The invention may have notably the following advantages over existing and known methods:

[0057] 1. It does not require the measurement of objects with known colors ("color chips" or "color standards").

[0058] 2. It does not require the presence of regions with homogeneous color in the image.

[0059] 3. It does not require the knowledge about the illuminants under which the images were acquired.

[0060] The steps above of the various elements of the invention may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. The hardware may [. . .]

[0064] Another specific embodiment of the method according to the invention will now be described.

[0065] The color compensating method of the invention aims to compensate color differences between a first image and a second image. In other applications it might be requested to do this for parts of images only or to do this for several image pairs. For the sake of simplicity of the description below, we will restrict in the following to the case of compensating color differences between a first image and a second image.

[0066] We start from two different images of a same scene. The colors of these images are expressed in a RGB color space.

[0067] In this implementation, we select first some typical illuminants from our daily life. We select for instance 21 black body illuminants from 2500 K to 8500 K. This includes CIE standard illuminants such as illuminant A, illuminant D65, etc. We also add another three common fluorescent illuminants: F2, F7 and F11. These Q=24 illuminants are defined by their spectrum and by their XYZ color coordinates. We define

\binom{Q}{2} = \frac{Q!}{2!\,(Q-2)!}

possible combinations of two different illuminants out of these Q=24 given illuminants, each combination having a first illuminant associated with the first image and a second illuminant associated with the second image.

[0068] Now, to compute the chromatic adaptation between the two illuminants of each defined combination of illuminants, we will use below, in this specific embodiment, the chromatic adaptation transform of CIE CAM02. See: N. Moroney, M. D. Fairchild, R. W. Hunt, C. Li, M. R. Luo, and T. Newman, "The ciecam02 color appearance model", in Color and Imaging Conference, vol. 2002, no. 1, Society for Imaging Science and Technology, 2002, pp. 23-27.

[0069] For each combination of a first and second illuminants, we build a color mapping operator consisting of the following concatenated steps: transformation of first RGB coordinates into first XYZ coordinates using a color characterization profile, transformation of first XYZ coordinates into first LMS coordinates using a spectral sharpening matrix, application of the chromatic adaptation matrix adapted to transform color as perceived under the first illuminant into color as perceived under the second illuminant, resulting into mapped chromatic-adapted LMS coordinates, transformation of mapped LMS coordinates into mapped XYZ coordinates using the inverse spectral sharpening matrix, transforming the mapped XYZ coordinates into mapped RGB coordinates using the inverse color characterization profile, resulting in a set of M color mapping operators. Therefore, for each combination of a first and second illuminants, a color mapping operator is given such that, when applied to the color of any object of the scene as perceived under the first illuminant, this color is transformed into a chromatic adapted color being the color of the same object but as perceived under the second illuminant. This color mapping operator is then a chromatic adaptation transform.

[0070] For example, mapping, from illuminant illum1 to illuminant illum2, a set of XYZ coordinates representing, in the XYZ color space, a color of the first image as perceived under this first illuminant illum1 can be achieved by a matrix M_{illum1 \to illum2} according to the formula (1) below:

\begin{pmatrix} X_{illum2} \\ Y_{illum2} \\ Z_{illum2} \end{pmatrix} = M_{illum1 \to illum2} \cdot \begin{pmatrix} X_{illum1} \\ Y_{illum1} \\ Z_{illum1} \end{pmatrix} \qquad (1)

wherein M_{illum1 \to illum2} is a CAT matrix defined in eq. (2), whereas M_{CAT02} in this equation is defined in the article quoted above entitled "The ciecam02 color appearance model":

M_{illum1 \to illum2} = M_{CAT02}^{-1} \cdot \begin{pmatrix} L_{illum2}/L_{illum1} & 0 & 0 \\ 0 & M_{illum2}/M_{illum1} & 0 \\ 0 & 0 & S_{illum2}/S_{illum1} \end{pmatrix} \cdot M_{CAT02} \qquad (2)

with

\begin{pmatrix} L_{illum2} \\ M_{illum2} \\ S_{illum2} \end{pmatrix} = M_{CAT02} \cdot \begin{pmatrix} X_{illum2} \\ Y_{illum2} \\ Z_{illum2} \end{pmatrix}, \qquad \begin{pmatrix} L_{illum1} \\ M_{illum1} \\ S_{illum1} \end{pmatrix} = M_{CAT02} \cdot \begin{pmatrix} X_{illum1} \\ Y_{illum1} \\ Z_{illum1} \end{pmatrix}

[0071] wherein:

[0072] X_illum1, Y_illum1 and Z_illum1 are the tristimulus values of the color of illuminant illum1 expressed in the XYZ color space;

[0073] X_illum2, Y_illum2 and Z_illum2 are the tristimulus values of the color of illuminant illum2 expressed in the XYZ color space;

[0074] L_illum1, M_illum1 and S_illum1 are the tristimulus values of the color of illuminant illum1 expressed in the LMS color space;

[0075] L_illum2, M_illum2 and S_illum2 are the tristimulus values of the color of illuminant illum2 expressed in the LMS color space.

[0076] Therefore, if we choose Q=24 illuminants, the total number of mappings (via CAT matrices such as M_{illum1 \to illum2}) would be

\binom{Q}{2} = \frac{Q!}{2!\,(Q-2)!} = 276.

After computing all possible CAT matrices, we add an identity matrix for the case where both views are under the same illuminant. We compute all these matrices offline.

[0077] Color correspondences RGB_i ↔ R'G'B'_i being given in the RGB color space between the first image and the second image, we now need to find the right CAT matrix that minimizes a global color distance between the mapped chromatic-adapted first image and the second image. In this specific embodiment, this global color distance will be computed as a quadratic sum of the color distance between colors that correspond one to another, RGB_i ↔ R'G'B'_i, in the first and the second image.
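Equations (1) and (2) translate directly into code; the sketch below (illustrative Python/NumPy, not taken from the patent) builds M_illum1→illum2 from the XYZ coordinates of the two illuminants. The white points used in the example are the usual published values for CIE illuminants A and D65; the function name is an invention of this example.

    import numpy as np

    M_CAT02 = np.array([              # CAT02 matrix of CIECAM02 (XYZ -> LMS)
        [ 0.7328,  0.4296, -0.1624],
        [-0.7036,  1.6975,  0.0061],
        [ 0.0030,  0.0136,  0.9834],
    ])

    def cat_matrix(xyz_illum1, xyz_illum2):
        """Chromatic adaptation matrix M_illum1->illum2 of equation (2):
        M = inv(M_CAT02) . diag(L2/L1, M2/M1, S2/S1) . M_CAT02."""
        lms1 = M_CAT02 @ np.asarray(xyz_illum1, dtype=float)
        lms2 = M_CAT02 @ np.asarray(xyz_illum2, dtype=float)
        return np.linalg.inv(M_CAT02) @ np.diag(lms2 / lms1) @ M_CAT02

    # Example: adaptation from CIE illuminant A to D65 (standard white points).
    xyz_A   = np.array([109.85, 100.0,  35.58])
    xyz_D65 = np.array([ 95.047, 100.0, 108.883])
    M_A_to_D65 = cat_matrix(xyz_A, xyz_D65)

    # Equation (1): XYZ values perceived under illum1 mapped to XYZ under illum2.
    xyz_under_A = np.array([40.0, 35.0, 20.0])
    xyz_under_D65 = M_A_to_D65 @ xyz_under_A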

Such a distance can be notably measured in the XYZ color space as shown below.

[0078] Since the CAT matrices M_{illum1 \to illum2} above are defined in XYZ color space, the first step here is to convert the color correspondences RGB_i [. . .]

[. . .] illuminant spectra. For example, in a given range of possible spectra, Q sample spectra are sampled and their XYZ color coordinates are calculated. In another example, we might select a range or a number of correlated color temperatures [. . .]
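To recap the matching step of paragraphs [0077] and [0078], the selection of the best CAT matrix can be sketched as follows (illustrative Python/NumPy, not part of the patent). The linear sRGB matrix stands in for the color characterization profile, which is an assumption of this example, and the error is measured in the XYZ color space as suggested above.

    import numpy as np

    # Hypothetical color characterization profile: linear sRGB -> XYZ (D65).
    M_RGB_TO_XYZ = np.array([
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ])

    def select_best_cat(rgb_first, rgb_second, cat_matrices):
        """rgb_first, rgb_second: (N, 3) arrays of corresponding linear RGB colors
        of the first and second image; cat_matrices: candidate CAT matrices
        defined in XYZ (including the identity matrix for identical illuminants).
        Returns the index of the matrix minimizing the quadratic XYZ error."""
        xyz_first = rgb_first @ M_RGB_TO_XYZ.T
        xyz_second = rgb_second @ M_RGB_TO_XYZ.T
        errors = [float(np.sum((xyz_second - xyz_first @ cat.T) ** 2))
                  for cat in cat_matrices]
        return int(np.argmin(errors))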

[. . .] image (Im_2) and the chromatic adapted colors of this chromatic adapted first image (Im_1Ai),
retaining the combination (Cm) of first and second illuminants (ILL_1m, ILL_2m) for which the corresponding calculated global color difference is the smallest (Δmin),
wherein the chromatic adapted first image that is obtained by applying the chromatic adaptation transform (CAMm) corresponding to said retained combination (Cm) to the colors of said first image (Im_1) is then retained as a color compensated first image (Im_1-comp) compensating said color differences.

2. Method for compensating color differences according to claim 1, wherein the colors of each image are represented by tristimulus values in a color space (LMS), wherein each chromatic adaptation transform related to a combination (Ci) of said set is a chromatic adaptation matrix (CAMi) such that, when applied to the tristimulus values representing, into said color space (LMS), the color of an object of said scene as perceived under the first illuminant (ILL_1i) of said combination (Ci), these tristimulus values are transformed into tristimulus values representing the color of the same object but as perceived under the second illuminant (ILL_2i) of said combination (Ci).

3. Method according to claim 1 wherein said color space (LMS) is the CAT02 LMS space.

4. Method for compensating color differences according to claim 1, wherein, color correspondences between the first image and the second image being determined, said global color difference (Δi) between the colors of the second image (Im_2) and the chromatic adapted colors of the chromatic adapted first image (Im_1Ai) is calculated as a quadratic sum of the color distances between colors that correspond one to another in the first and the second image, wherein said sum is calculated over all color correspondences over the two images.

5. Device for compensating color differences between a first image (Im_1) of a scene and a second image (Im_2) of the same scene,
wherein a set of

\binom{Q}{2} = \frac{Q!}{2!\,(Q-2)!}

possible combinations (C1, C2, . . . , Ci, . . . , CK) of two different illuminants (ILL_1i, ILL_2i) out of Q given illuminants is defined,
wherein, for each combination (Ci) of a first and second illuminants (ILL_1i, ILL_2i) of said set, a chromatic adaptation transform (CAMi) is given such that, when applied to the color of an object of said scene as perceived under said first illuminant (ILL_1i), this color is transformed into a chromatic adapted color being the color of the same object but as perceived under said second illuminant (ILL_2i),
said device comprising:
a first module configured for applying each (CAMi) of said chromatic adaptation transforms to the colors of said first image (Im_1) such as to obtain chromatic adapted colors forming a corresponding chromatic adapted first image (Im_1Ai) and configured for calculating a corresponding global color difference (Δi) between the colors of the second image (Im_2) and the chromatic adapted colors of this chromatic adapted first image (Im_1Ai),
a second module configured for retaining, among said combinations of said set, the combination (Cm) of first and second illuminants (ILL_1m, ILL_2m) for which the corresponding calculated global color difference is the smallest (Δmin), and for retaining the chromatic adapted first image that is obtained by applying the chromatic adaptation transform (CAMm) corresponding to said retained combination (Cm) to the colors of said first image (Im_1) as a color compensated first image (Im_1-comp) compensating said color differences.

6. Electronic device incorporating the device of claim 5, wherein said electronic device is capable of processing images.

7. Computer program product comprising program code instructions to execute the steps of the method according to claim 1, when this program is executed by a processor.

* * * * *