
Fast and Stable Color Balancing for Images and Augmented Reality

Thomas Oskam 1,2   Alexander Hornung 1   Robert W. Sumner 1   Markus Gross 1,2
1 Disney Research Zurich   2 ETH Zurich

Abstract

This paper addresses the problem of globally balancing colors between images. The input to our algorithm is a sparse set of desired color correspondences between a source and a target image. The global color transformation problem is then solved by computing a smooth vector field in CIE Lab color space that maps the gamut of the source to that of the target. We employ normalized radial basis functions for which we compute optimized shape parameters based on the input images, allowing for more faithful and flexible color matching compared to existing RBF-, regression- or histogram-based techniques. Furthermore, we show how the basic per-image matching can be efficiently and robustly extended to the temporal domain using RANSAC-based correspondence classification. Besides interactive color balancing for images, these properties render our method extremely useful for automatic, consistent embedding of synthetic graphics in video, as required by applications such as augmented reality.

Figure 1. Two applications of our color balancing algorithm. Top: an underexposed image is balanced using only three user-selected correspondences to a target image. Bottom: our extension for temporally stable color balancing enables seamless compositing in augmented reality applications by using known colors in the scene as constraints.

1. Introduction

Pablo Picasso highlighted the importance of color in painting with the famous quote, "They'll sell you thousands of greens. Veronese green and emerald green and cadmium green and any sort of green you like; but that particular green, never." Today, the same struggle to capture the perfect nuance of color lives on in digital photography and image-based applications. The colors that are ultimately recorded by both consumer-grade and high-end digital cameras depend on a plethora of factors and are influenced by complex hardware and software processing algorithms, making the precise color quality of captured images difficult to predict and control. As a result, one generally has to rely on post-processing algorithms to fix color-related issues.

This problem is generally known as color balancing or color grading, and plays a central role in all areas involving capturing, processing and displaying image data. In digital photography and filmmaking, specifically trained artists work hard to achieve consistent colors for images captured with different cameras, often under varying conditions and even for different scenes. With today's tools this process requires considerable, cost-intensive manual effort. Similar issues arise in augmented reality applications, where computer generated graphics must be embedded seamlessly and in real-time into a video stream. Here, achieving consistent colors is critical but highly challenging, since white balance, exposure time, gamma correction, and other factors may continually change during acquisition and cannot be perfectly controlled or addressed through calibration alone. Therefore, an efficient correction and adaptation of colors that is agnostic to the underlying capture hardware is highly desirable in such AR-related applications. However, aside from a few preliminary efforts [13, 14], practically applicable solutions to this problem are yet to be developed.

This paper contributes a real-time color balancing algorithm that is able to globally adapt the colors of a source image to match the look of a target image, requiring only a sparse set of color correspondences. Fundamentally, our approach is based on radial basis function (RBF) interpolation. However, we show that, for an optimal RBF-based color matching, it is crucial to optimize for the shape of the employed basis functions.

As a second main contribution, we then extend the per-image computation to be temporally consistent, enabling the application of the basic balancing mechanism in video-based applications such as augmented reality. A key component to the success of this system is a reliable RANSAC-based classification algorithm that selects a temporally stable set of correspondences used for the basic color balancing. We present a fast GPU-assisted implementation of our color balancing algorithm that achieves run times of 1 ms and below for real-world applications. In this paper and the accompanying video, we show results for interactive color editing of images as well as for improved visual quality and consistency of video and computer generated images in augmented reality applications.

2. Related Work

In this section, we discuss related work on histogram matching, user controlled color transfer, and color calibration.

Statistics and Histogram Matching: This is the process of transferring the color distribution statistics from one image to another image, pioneered by Reinhard et al. [20]. They proposed computing the mean and standard deviation of the color distribution in the color space. Then, by scaling and translating the parameters onto the ones of the target image, they achieve automatic color transfer. The approach has been applied to histograms and extended to achieve a mapping in different color spaces [25], minimum cost linear mapping [19], and per-region execution of the algorithm [11]. Further extensions have been proposed that optimize the images after matching to avoid discontinuities in color gradients [18, 26].

While statistics and histogram matching methods for automatic color transfer can produce very appealing results, they provide only limited control. Direct control over color changes is often required in production and difficult to achieve with these methods.

Color transfer with user interaction: Abadpour et al. [2] first proposed an image color transfer algorithm with user interaction. In their work, the user is allowed to denote corresponding regions between images. Colors are then locally matched using fuzzy principal component analysis. Other methods have been proposed that aid the user in color matching [17, 22]. The user is allowed to select a local region, and the actions are propagated throughout adjacent parts of the image. An et al. [6] propose a scribble-based framework for color transfer between images. Fast and interactive methods for editing images and video have also been proposed. An et al. [5] propagate local edits by assisting a user with iteratively refining rough edits to areas of spatial and appearance similarity. Farbman et al. [9] propose the use of diffusion maps to propagate image editing information. In these approaches the impact of user controlled edits is typically targeted towards local color changes. Our work, in contrast, targets global color balancing using only sparse and possibly incomplete input.

Li et al. [15] provide an approach for image and video editing with instant results. They employ radial basis function interpolation in a feature space to propagate sparse edits locally. In contrast to this work, we formulate the problem as a global color space transformation independent of the image space, and we optimize the shape of the basis functions to achieve the color balancing behavior found in example images. We show that the basis function used by Li et al. is suboptimal for the purpose of global image balancing.

Color calibration and balancing: Color calibration is the process of computing the color distribution function of a camera to allow recorded images to be consistently colored. Adams et al. [3] give an overview of standard camera calibration techniques using linear transformations. Usually, a 3x3 matrix is computed using regression. A related area is color constancy. Humans perceive colors to be consistent even under different illumination conditions. This property is not available for digital images: colors have different numerical values under different illuminations. In order to counter this issue, different algorithms have been proposed. An overview of different algorithms is provided by Agarwal et al. [4] and Cohen [7]. In contrast to our approach, many of these methods do not apply additional information such as correspondences or known colors in a scene to balance the images. Therefore, these methods are less interactive and less robust when applied to videos.

There is also significant work that enhances the quality of imagery using a database of example images [8, 12, 21, 23]. The work of Yang et al. [27] achieves color compensation by replicating the non-linear relationship between the camera color and luminance. Wang et al. [24] transfer the color and tone style of expertly photographed images onto images from cheap cameras by learning the complex transformation encoded in local descriptors. Recent work of Lin et al. [16] revisits the radiometric calibration of cameras by investigating 10,000 images from over 30 cameras and introduces a new imaging model for computing the color response curves. Our parameter optimization of the radial basis functions is also based on extracting image properties from data sets. However, we design our functions to be sparsely sampled so that they provide appealing results when interpolated, in order to considerably increase performance.

Color Balancing in Augmented Reality: Only recently has there been research in improving the compositing of images in augmented reality. Klein et al. [13] examine and reproduce several image degradation effects of digital cameras to increase the realism of augmentation. However, they do not consider color shifts due to camera parameter changes. Knecht et al. [14] propose to balance rendered images using colors in the scene. However, they apply a linear regression model and do not correct for colors corrupted by occlusions or reflections.

3. Vector Space Color Balancing

Given a sparse set of color correspondences, the goal is to transform the color gamut of an image such that (i) colors that have references get as close as possible to that point in the color space, and (ii) colors for which no references are available are transformed in a plausible way.

We interpret the given color correspondences as vectors in the color space. In order to produce a transformation of the gamut, we compute a vector field around the reference vectors that achieves the aforementioned goals. The concept is demonstrated in Figure 2. The correspondence vectors themselves represent constraints. We use normalized radial basis function interpolation to propagate these constraints to the rest of the color space; Section 3.1 explains how this is performed. The selection of basis functions, however, influences how the correspondence vectors are propagated through the color space and, ultimately, how colors without constraints are balanced. Therefore, we propose optimizing the shape of a basis function based on example images. This is discussed in Section 3.2.

Figure 2. The idea of vector space color balancing. On the left, the source colors are shown in black and the corresponding target colors in white. The idea is to interpret all correspondences as vectors in color space and smoothly extend them to a vector field with which the gamut of the image is transformed. This procedure creates a continuous and flexible color transfer function.

Figure 3. Plots of the basis functions we consider for our color transfer. On the left, the three functions are shown in relative scale. Note their different fall-off behaviors. On the right, the curves are shown with optimized shape parameters in their absolute scale. The optimized curves significantly differ in their final spread and variation.

3.1. Vector Field Computation

For a given pair of colors (ci, di) in the three-dimensional CIE Lab space, we define the ci as support points, and the vectors vi = di − ci as data values. For each vector vi, a basis function φi is provided that describes the weight with which the vector is distributed in space. For each new point e in the color space, an according translation vector v(e) is computed as a normalized sum of the weighted φi:

    v(e) = ( Σ_{i=1}^{n} φi(e) wi ) / ( Σ_{i=1}^{n} φi(e) ).    (1)

The summation of the support functions allows an independent evaluation of each point in the vector field, which suits a completely parallel implementation for each pixel in an image. The individual weights wi allow the treatment of the φi as radial basis functions (RBF). RBF interpolation performs a least squares minimization on the overlapping support between the individual φi by choosing weights wi such that the squared difference between the data vectors vi and the weighted sum of the basis functions at ci is minimized in the least squares sense:

    argmin_{wi} || vi − v(ci) ||².    (2)

This results in a system of equations v = Φw per color dimension, where the matrix Φ is the same in all three systems and contains, at each position (i, j), the normalized support of all cj seen at the interpolation point ci. The weight vectors wi can be composed by inverting Φ and assembling the individual components from the three systems of equations.

Conflicting constraints of two source colors ci and cj can cause the matrix Φ to become near-singular. This causes components of the resulting wi to become negative. Negative values create an inverted distribution of the constraint vectors vi in the color space and produce undesired artifacts [15]. We avoid this problem by clustering support points that lie within the limit of perceivable distance (1.0 in the CIE Lab space). Additionally, we select basis functions that have a small spread to avoid large overlapping support between constraints. This keeps the matrix Φ from degenerating. In Section 3.2, we discuss how to optimize basis functions and show that our normalized Shepard function combines optimal and robust interpolation results with a very small spread, in contrast to the generally utilized Gaussian function.
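To make the construction concrete, the following is a minimal NumPy sketch of the weight solve (Eq. 2) and the field evaluation (Eq. 1). The function names are our own, the normalized Shepard kernel from Section 3.2 is used as the default basis, and the clustering of nearly coincident support points described above is omitted.

```python
import numpy as np

def norm_shepard(d, eps=3.7975):
    """Normalized Shepard basis on CIE Lab distances (see Section 3.2)."""
    return 1.0 / (1.0 + eps * d)

def fit_weights(src, dst, phi=norm_shepard):
    """Solve v = Phi w (Eq. 2): one n-by-n system whose matrix is shared
    by all three Lab dimensions. src, dst: (n, 3) corresponding Lab colors."""
    v = dst - src                                   # constraint vectors v_i = d_i - c_i
    D = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    Phi = phi(D)                                    # support of c_j at c_i
    Phi /= Phi.sum(axis=1, keepdims=True)           # row-normalize, as in Eq. 1
    return np.linalg.solve(Phi, v)                  # weights w, shape (n, 3)

def balance_color(e, src, w, phi=norm_shepard):
    """Evaluate the vector field (Eq. 1) at a Lab color e and translate it."""
    s = phi(np.linalg.norm(e - src, axis=-1))
    return e + (s[:, None] * w).sum(axis=0) / s.sum()
```

Fitting n correspondences costs a single n-by-n solve, while each evaluation is only O(n); this is what makes the per-pixel GPU evaluation in Section 5 cheap.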


3.2. Choice of Basis Functions

The choice of an appropriate basis function influences both the global impact of color constraints and the stability of the interpolation [15]. Using RBF interpolation, we can make sure that the effect of overlapping support between the basis functions is minimized. However, it is still unclear which functions should be used for interpolation and how to select their parameters. To find out how to best interpolate colors given a sparse set of correspondences, we test the following three functions with different fall-off behavior over distance (see Figure 3):

    normalized Shepard:  si(cj) = (1 + ε ||ci − cj||)^(−1)
    Gaussian:            gi(cj) = e^(−(ε ||ci − cj||)²)
    inverse quadric:     qi(cj) = (1 − α) / (1 + (ε ||ci − cj||)²) + α

The normalized version of the Shepard function has the quickest fall-off. It gets surpassed after a while by the Gaussian function, which decreases exponentially. The constant value we add to the inverse quadric function (α = 0.01 in our experiments) guarantees a minimum value of the function even at infinite distance.

Each of the functions is parameterized with a variable ε that influences the width of the function spread. This parameter allows finding the optimal reconstruction capability of each function by optimizing for ε.

To compare the performance of each parameterized function on reconstructing colors, we first create data sets by filming a color calibration checker (Digital ColorChecker SG in our experiments). During video capture, internal camera parameters (e.g. exposure or contrast) are changed individually so that the global color changes are part of the data. In each data set, one frame with default camera settings is selected as the base frame. In this frame, 50% of the color fields are selected at random and used as data points. The corresponding color fields in all other frames in the data set are used as references. The score of a function (with a specific ε-value) is then computed as the average reconstruction error of the other 50% (not referenced) colors in the base frame across all frames of the data set.

Figure 4 shows plots from some of our tests. The error is shown for the normalized Shepard (red), the Gaussian (green), and the inverse quadric (blue) functions for an increasing ε-value. The plots are shown with differently scaled ε for each function, but with the same scaling across all tests.

Figure 4. Average reconstruction error of the considered basis functions while interpolating 50% of the color checker fields under changing camera parameters. The error is shown as a function of the ε-values for the normalized Shepard (red), the Gaussian (green), and the inverse quadric (blue) basis functions. The plots show that the ε-value significantly influences the result.

For each of the three functions there is an optimal ε. This surprising result shows that there is a clear correlation between the spread of a basis function and its capability of reconstructing global color changes. Second, across all tests, the magnitude of the optimal ε is in a similar range. The ε-values we determined through our tests are ε = 3.7975 (σ = 0.5912) for the normalized Shepard, ε = 0.0846 (σ = 0.0289) for the Gaussian, and ε = 0.0690 (σ = 0.0289) for the inverse quadric function, where the σ-values are the standard deviations.

The shapes of the optimized functions (and their deviation due to σ as dashed curves) are shown in Figure 3 on the right. Our normalized version of the Shepard function clearly shows the smallest spread of all three functions, rendering it the most stable function for RBF interpolation. Note that the Gaussian function has a significantly wider spread and also a much higher variance, making it a suboptimal choice for global color balancing. The impact of the number of references using our normalized Shepard function with optimal ε for global color balancing is demonstrated in Figure 5.
With only a few correspondences, the error in the CIE Lab space is reduced considerably.
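In code, the three kernels with the optimized parameters reported above could look as follows. This is a sketch: the constant names are ours, and the distances d are CIE Lab distances.

```python
import numpy as np

# Optimized shape parameters (means over our color checker data sets)
EPS_SHEPARD = 3.7975   # sigma = 0.5912
EPS_GAUSS   = 0.0846   # sigma = 0.0289
EPS_QUADRIC = 0.0690   # sigma = 0.0289
ALPHA       = 0.01     # keeps the inverse quadric above zero at infinity

def norm_shepard(d, eps=EPS_SHEPARD):
    return 1.0 / (1.0 + eps * d)

def gaussian(d, eps=EPS_GAUSS):
    return np.exp(-(eps * d) ** 2)

def inv_quadric(d, eps=EPS_QUADRIC, alpha=ALPHA):
    return (1.0 - alpha) / (1.0 + (eps * d) ** 2) + alpha
```

Any of the three can be passed as phi to the fit_weights/balance_color sketch from Section 3.1; the ε optimization itself amounts to a grid search that minimizes the reconstruction error on the held-out 50% of the checker fields.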

Figure 5. Example of the global impact of our color balancing using only a few correspondences. The target image on the left is reconstructed using a photograph of the same scene with a different white balance as source image. The top row shows the successive addition of references (red dots; 1, 2, 4, and 6 color references). The images in the bottom row visualize the difference in CIE Lab space between the balanced image and the original.

4. Temporal Color Tracking

A typical scenario where the permanent update of the color balancing improves the visual quality is augmented reality, where a synthetic rendering should be embedded in a seamless manner into a video stream. In order to tackle this problem, we propose to use known color patches in the scene (see Figure 1). If these colors can be robustly tracked over time, they can be used as color references for their supposed values. These references then automatically balance rendered images to the video footage, adapting the renderings as the colors of the video frames change due to camera adjustments.


Figure 6. Overview of our stable color tracking pipeline. First, color values are extracted from the video frame and corrupt ones are rejected using RANSAC (affine model) and temporal coherence. Then, missing colors are added using information from the previous frames. Finally, the stable color list is combined with colors from a digital image to create the transfer constraints that can be used for balanced augmentation of rendered objects.

We base our color tracking on the observation that desired color changes in the video stream are global. These changes are due to the camera automatically adjusting internal parameters such as exposure or gain. The color patches in the scene that receive these color changes will be used as constraints for the balancing of their known unbalanced values. However, due to occlusions or specular reflections, some of the tracked color patches will be corrupt. These color distortions happen locally, a fact that can be utilized to remove them as outliers. Figure 6 gives an overview of our robust color tracking pipeline. For each frame, the following steps are performed (a code sketch follows at the end of this section):

1. Extract Colors. The first step is to extract the colors di from the video frame. The position tracking provides the positions in the image where the target colors are found. In our implementation we use ARToolKit marker tracking [1], but any other position tracking solution can be used as well. To reduce per-pixel noise we perform a neighborhood averaging over a small window (usually 7 by 7 pixels).

2. Detect Outliers. To detect corrupted colors we can fit a global model between the extracted di and a set of reference colors. These reference colors ri can be taken from the digital image of the marker or extracted from the first frames of the video in a pre-processing step. In order to keep the number of false positives low, a conservative model should be fitted between the ci and ri. We chose the affine transformation model ci = A ri + t [3], which can be computed analytically and avoids over-fitting to corrupted colors.

Outliers in the scene colors di can now be detected by fitting the affine model using the random sample consensus algorithm (RANSAC) [10]. This first step allows removing most of the corrupted colors. However, false positives may slip through this outlier detection. In order to further increase the robustness of the tracking, we separate the remaining inliers into two groups. Trusted inliers are colors that have not been detected as outliers for more than κt frames. These colors are used as valid target colors. Inlier colors that have been detected as outliers in one of the past κt frames may be false positives; these colors are also treated as outliers. In our experiments we found that false positives rarely appear for more than two frames. Therefore, in our experiments, we mostly used κt = 3.

3. Reconstruct Colors. At this point, corrupted colors have been removed from the list of di. Additionally, the risk of false positives has been decreased by taking temporal consistency into account. Now, in order to maximize the number of colors available for color balancing, we can use the last valid colors of each tracked point from the previous frames. These colors are stored whenever a tracked value is regarded as a trusted inlier (being not an outlier for more than κt frames). Color values that are suddenly corrupt due to occlusions or specular reflections can then be replaced by an updated version of their last valid value. The update of these last valid colors is performed by applying the global affine transformation performed by all trusted inlier colors since the last frame. In our implementation, we perform this color reconstruction only for outliers that have been inliers for more than κc frames. This removes colors that were only detected as inliers for a short period and increases the robustness. We call these colors comeback points. In our implementation we have set κc = 10.

4. Create References.
The final color references (ci, di) for the balancing of rendered objects can now be created. For all di that are either trusted inliers or comeback points, the corresponding color ci can be extracted from the digital image of the marker. The constraints (ci, di) now describe the color transfer function from rendered footage to the current video frame color space.

With our fast global color balancing for sparse color correspondences we are able to adapt the colors to fit a target image. With our robust color tracking, we can extend this into the temporal domain. Using a known marker in the scene (e.g. the cover of a book) we can balance newly rendered footage to augment the video and increase the realism of augmented reality applications.
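The promised sketch of the outlier detection in step 2 follows, in Python/NumPy with our own naming. The iteration count and the Lab-distance threshold are illustrative choices, not values from the paper, and the temporal bookkeeping of steps 2-3 is only outlined in the comments.

```python
import numpy as np

def fit_affine(r, d):
    """Least-squares fit of the affine color model d ~ A r + t [3].
    r, d: (n, 3) reference and tracked Lab colors; returns the (4, 3)
    stacked matrix [A; t] for homogeneous evaluation."""
    R = np.hstack([r, np.ones((len(r), 1))])
    M, *_ = np.linalg.lstsq(R, d, rcond=None)
    return M

def ransac_inliers(r, d, iters=100, thresh=5.0, rng=None):
    """RANSAC [10]: repeatedly fit the affine model to a minimal random
    subset (4 correspondences) and keep the largest consensus set."""
    if rng is None:
        rng = np.random.default_rng()
    best = np.zeros(len(r), dtype=bool)
    R = np.hstack([r, np.ones((len(r), 1))])
    for _ in range(iters):
        idx = rng.choice(len(r), size=4, replace=False)
        pred = R @ fit_affine(r[idx], d[idx])
        inl = np.linalg.norm(pred - d, axis=1) < thresh
        if inl.sum() > best.sum():
            best = inl
    return best

# Temporal classification on top of the per-frame RANSAC result
# (kappa_t = 3, kappa_c = 10 in the paper): a patch becomes a trusted
# inlier once it has not been an outlier for kappa_t frames; an outlier
# that was an inlier for more than kappa_c frames becomes a comeback
# point, and its last valid color is advanced by the affine model
# fitted to the trusted inliers of the current frame.
```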

5. Implementation

Our color balancing algorithm can be implemented by simply extending any given fragment shader. The reference colors ci and weights wi are transferred to the GPU as texture data. The matrix inversion is calculated beforehand on the CPU, as it only needs to be performed once. The GPU fragment shader algorithm is described in Algorithm 1.

Algorithm 1 Fragment Shader Color Balancing
 1: e ← pixelColor()
 2: eLab ← rgb2lab(e)
 3: ve ← (0, 0, 0)
 4: se ← 0
 5: for i = 1 → n do
 6:   [ci, wi] ← dataTexture(i/n)
 7:   r ← phi(ci, eLab)
 8:   ve ← ve + r wi
 9:   se ← se + r
10: end for
11: eLab ← eLab + ve/se
12: return lab2rgb(eLab)

First, the original color e of the pixel is determined (line 1). The pixel color is the result of the given shading algorithm. This color is then transformed to CIE Lab space (line 2). Then, the variables ve and se are initialized (lines 3 and 4). They are used to compute intermediate results of the transformation vector computation. Now, a loop is performed n times, where n is the number of color correspondences. In each iteration, first, the color ci and the weight vector wi are acquired through a texture lookup (line 6). Then, the function φ is applied using the current color eLab and the support color ci (line 7). This gives the function value r, which is used to add the current weight wi to ve (line 8). Additionally, the sum of all the function values r is stored in se (line 9), so that after the loop the transformation vector ve can be normalized and used to translate the original pixel color in CIE Lab space (line 11). The final color is then transformed back into RGB space and returned to the frame buffer (line 12).

Figure 7. The performance of our implementation. The green function shows the run-time for the color balancing on the GPU, the blue function the time to solve the RBF equation system on the CPU.

We have implemented the color balancing algorithm on an Intel Core i7 3.2 GHz with an Nvidia GeForce GTX 460. The run-time in ms is shown in Figure 7 for an increasing number of references. The solver for the equation system (blue graph) is the bottleneck, as it involves a matrix inversion in the size of the number of references. The RBF interpolation (green graph), however, performs at around 0.05 ms almost independently of the number of references.
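For reference, Algorithm 1 also maps to a few lines of vectorized NumPy on the CPU. This sketch is our own and assumes scikit-image for the color space conversions; the weights w would come from the one-time solve of Section 3.1, mirroring the CPU matrix inversion described above.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def balance_image(img, c, w, phi):
    """CPU reference of Algorithm 1, applied to all pixels at once.
    img: (H, W, 3) RGB in [0, 1]; c: (n, 3) Lab support colors;
    w: (n, 3) RBF weights; phi: basis function on Lab distances."""
    lab = rgb2lab(img)                                   # lines 1-2
    d = np.linalg.norm(lab[..., None, :] - c, axis=-1)   # (H, W, n) distances
    r = phi(d)                                           # line 7, all pixels at once
    ve = np.einsum('hwn,nk->hwk', r, w)                  # line 8
    se = r.sum(axis=-1, keepdims=True)                   # line 9
    return lab2rgb(lab + ve / se)                        # lines 11-12
```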
6. Results and Applications

In this section we present various color balancing results for image editing and augmented reality applications, and compare them to related work. Please see also the accompanying video, where we show several examples for both color balancing and augmented reality.

6.1. Interactive Color Transfer and Correction

Given a source image and a target image with a desired look, the user can interactively define a color transfer by clicking correspondences. The result is instantly visible, and the user can drag the correspondence points around to see the effect. A number of examples that demonstrate the flexibility of our approach are shown in Figure 8. Due to the global nature of the vector field color transformation, it is very easy to balance an image to exactly match an example photograph. Even completely changing the color composition of an image can be achieved robustly using the same kind of input.

Figure 9 shows a series of comparisons of our color balancing method with other color balancing approaches. Since our method is designed and optimized for global color balancing, it is able to transfer the colors from the target image to the source image with only a sparse set of references. User edits in Photoshop changing indirect parameters (saturation, etc.) can be unintuitive and time consuming, and it is difficult to achieve exact results. Achieving the correct balance between the different colors in the bottom example is difficult with only indirect manipulation. Both the Photoshop Color Match function and the approach of Reinhard et al. [20] perform very similarly. Both methods automatically achieve a global balancing of the image but cannot recreate the exact shade of the Taj Mahal. Given the sparse input used for our color balancing, the approach of Li et al. [15] struggles, as their method is specifically designed for local edits. In areas with strong gradients (e.g. the dome of the Taj Mahal) their approach produces artifacts. Linear regression is, due to its limited flexibility, not able to satisfy the constraints.

6.2. Augmented Reality Color Balancing

While the color transfer tool only utilizes the flexible vector space color transfer, applications in augmented reality become possible when combining it with our robust color tracking over time. A known marker in the scene, like the cover of a book, can be tracked and the colors referenced with a digital copy of the image. Figure 10 shows two examples of color balancing for augmented reality.

Figure 8. Several examples of our sparse color balancing. The colored circles in the images mark the color correspondences.

Figure 9. Comparison of our approach with other methods for image balancing using sparse correspondences.

Figure 10. Two examples of our color tracking and balancing in augmented reality applications. The top row shows a series of comparisons between our color correction and linear regression. Our approach is both more accurate in recreating exact colors of the book (red x) and produces more plausible color balancing for unknown colors (e.g. the green frog). Outliers arising due to specular reflection are detected and supported by comeback colors (yellow triangle). The bottom row shows an example where an animated image is placed in a real image.

7. Discussion

In this paper, we have presented an approach for interactive vector-based color balancing that requires only a sparse set of correspondences, as well as an extension for temporally-consistent rendering in augmented reality applications.

Generally, in augmented reality applications, it is crucial to perform proper image balancing for convincing augmentation. A viewer is especially sensitive to colors that already appear in the video stream, and will notice when similarly colored renderings are balanced differently. The results in Figure 10 and in the accompanying video demonstrate that our color tracking and balancing achieve the desired accuracy, while linear regression [14] struggles.

The top example shows a series of comparisons between our color correction and linear regression. It can be observed that our approach is more accurate for colors that are available in the video stream and more robust for colors that are not tracked. The bottom row shows an example where an animated image is placed in a real book. Due to the tracking of the colors on the same page and the color adjustment, we are able to more realistically embed the animated figure into the video. Note that the colors of the flat shaded examples would only work for one frame, as the white balance changes over time.

Through our proposed global optimization of interpolation functions, we provide a tool to optimize function parameters to mimic color changes from example images and adjust colors optimally accordingly.

The proposed color correction and tracking algorithms have some limitations that direct us to areas of future work. Over- or underexposed images flatten the color gradients to a point where only a few colors remain. Our color transfer method is not able to reconstruct the missing information, since it is not smooth spatially and therefore not able to increase the amount of color gradient. Coupling the color transfer algorithm to camera exposure settings is an area of future work that could address this issue.

The color tracking algorithm relies on flat color areas in the image to work well. This requirement is orthogonal to many tracking algorithms that use features like corners or edges. However, instead of producing false results, our algorithm will reject such colors. In our current implementation we employ equidistant sampling of colors on the marker surface, where the colors near tracking features get removed. Improving this sampling by using information in the marker image, such as edges or gradients, would be an interesting topic to explore.

References

[1] ARToolKit. http://www.hitl.washington.edu/artoolkit/. [Online; accessed 29-June-2012].
[2] A. Abadpour and S. Kasaei. A fast and efficient fuzzy color transfer method. In Symp. on Signal Proc., pages 491–494, 2004.
[3] J. Adams, K. Parulski, and K. Spaulding. Color processing in digital cameras. IEEE Micro, 18(6):20–30, Nov. 1998.
[4] V. Agarwal, B. R. Abidi, A. Koschan, and M. A. Abidi. An overview of color constancy algorithms. Journal of Pattern Recognition Research, pages 42–54, 2006.
[5] X. An and F. Pellacini. AppProp: all-pairs appearance-space edit propagation. ACM Trans. Graph., 27(3), 2008.
[6] X. An and F. Pellacini. User-controllable color transfer. Comput. Graph. Forum, 29(2):263–271, 2010.
[7] N. Cohen. A color balancing algorithm for cameras. EE368 Digital Image Processing, 2011.
[8] K. Dale, M. K. Johnson, K. Sunkavalli, W. Matusik, and H. Pfister. Image restoration using online photo collections. In ICCV, pages 2217–2224, 2009.
[9] Z. Farbman, R. Fattal, and D. Lischinski. Diffusion maps for edge-aware image editing. ACM Trans. Graph., 29(6):145, 2010.
[10] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):381–395, 1981.
[11] S. Kagarlitsky, Y. Moses, and Y. Hel-Or. Piecewise-consistent color mappings of images acquired under various conditions. In ICCV, pages 2311–2318, 2009.
[12] S. B. Kang, A. Kapoor, and D. Lischinski. Personalization of image enhancement. In CVPR, pages 1799–1806, 2010.
[13] G. Klein and D. W. Murray. Simulating low-cost cameras for augmented reality compositing. IEEE Trans. Vis. Comput. Graph., 16(3):369–380, 2010.
[14] M. Knecht, C. Traxler, W. Purgathofer, and M. Wimmer. Adaptive camera-based color mapping for mixed-reality applications. In ISMAR, pages 165–168, 2011.
[15] Y. Li, T. Ju, and S.-M. Hu. Instant propagation of sparse edits on images and videos. Comput. Graph. Forum, 29(7):2049–2054, 2010.
[16] H. T. Lin, S. J. Kim, S. Süsstrunk, and M. S. Brown. Revisiting radiometric calibration for color computer vision. In ICCV, 2011.
[17] D. Lischinski, Z. Farbman, M. Uyttendaele, and R. Szeliski. Interactive local adjustment of tonal values. ACM Trans. Graph., 25(3):646–653, 2006.
[18] A. Neumann and L. Neumann. Color style transfer techniques using hue, lightness and saturation histogram matching. In Computational Aesthetics, pages 111–122, 2005.
[19] F. Pitié and A. C. Kokaram. The linear Monge-Kantorovitch linear colour mapping for example-based colour transfer. In Visual Media Production, 2007.
[20] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley. Color transfer between images. IEEE Comput. Graph. Appl., 21(5):34–41, 2001.
[21] H. Siddiqui and C. A. Bouman. Hierarchical color correction for camera cell phone images. IEEE Trans. on Image Proc., 17(11):2138–2155, 2008.
[22] Y.-W. Tai, J. Jia, and C.-K. Tang. Local color transfer via probabilistic segmentation by expectation-maximization. In CVPR (1), pages 747–754, 2005.
[23] B. Wang, Y. Yu, T.-T. Wong, C. Chen, and Y.-Q. Xu. Data-driven image color theme enhancement. ACM Trans. Graph., 29(6):146, 2010.
[24] B. Wang, Y. Yu, and Y.-Q. Xu. Example-based image color and tone style enhancement. ACM Trans. Graph., 30(4):64, 2011.
[25] X. Xiao and L. Ma. Color transfer in correlated color space. In VRCIA, pages 305–309, 2006.
[26] X. Xiao and L. Ma. Gradient-preserving color transfer. Comput. Graph. Forum, 28(7):1879–1886, 2009.
[27] S. Yang, Y.-A. Kim, C. Kang, and B.-U. Lee. Color compensation using nonlinear luminance-rgb component curve of a camera. In ISVC (2), pages 617–626, 2011.