
Ensuring Color Consistency across Multiple Cameras

Adrian Ilie and Greg Welch
University of North Carolina at Chapel Hill, Computer Science Department
Sitterson Hall, CB# 3715, Chapel Hill, NC 27599, USA
{adyilie, [email protected]}

Abstract

Most multi-camera vision applications assume a single common color response for all cameras. However, different cameras (even of the same type) can exhibit radically different color responses, and the differences can cause significant errors in scene interpretation. To address this problem we have developed a robust system aimed at inter-camera color consistency. Our method consists of two phases: an iterative closed-loop calibration phase that searches for the per-camera hardware register settings that best balance linearity and dynamic range, followed by a refinement phase that computes the per-camera parametric values for an additional software-based color mapping.

1. Introduction

Many of the computer vision and computer graphics applications that have emerged during the last decade make use of multiple images. Some applications involve the acquisition of multiple images using a single camera [14, 10, 22]. While using a single camera ensures a consistent color response between images, the approach limits the applicability of these methods to static scenes. Alternatively, one can capture dynamic scenes using multiple cameras [12, 25, 23]. However, such applications require consistent inter-camera color responses to produce artifact-free results. Figure 2 illustrates some artifacts in a reconstruction produced by an implementation of the 3D reconstruction system in [26].

Figure 2. Artifacts in the 3D reconstruction of a physical cookie box. Left: a reconstruction using uncalibrated cameras with the same hardware settings. Right: a reconstruction with color-calibrated cameras. Artifacts are eliminated and the colors are more natural.

Unfortunately, most cameras (even of the same type) do not exhibit consistent responses. Figure 1 illustrates the differences between the responses of 8 cameras to the 24 colors of the GretagMacbeth [5] ColorChecker™ chart imaged under the same illumination conditions and using the same hardware settings. The data shows that color values are significantly different from camera to camera. This is due, for example, to aperture variations, fabrication variations, electrical noise, and interpolation artifacts arising from the reconstruction of a full-resolution color image from a half-resolution Bayer pattern image [3].

To address the color matching problem we have devised a two-phase process: an iterative closed-loop calibration phase that searches for the per-camera hardware register settings that best balance linearity and dynamic range, followed by a refinement phase that computes the per-camera parametric values for an additional software-based color mapping. Variations of these phases have previously been explored separately; however, we believe the hardware and software approaches offer complementary benefits that can yield better results when combined. Our goal is to bring the response curves of several cameras closer together and as close as possible to a desired reference image, while also minimizing the amount of noise in the images.

Note that in color science the term photometric calibration is typically defined as setting a device to a particular state characterized during a profiling process, such as the one described in [16], which is later taken into account in order to achieve a desired behavior of the device. In computer graphics, the same term is typically used to describe the process of tuning a general model of the physical device to best describe the specific instance of the device [4]. The first definition corresponds to our iterative closed-loop hardware calibration phase, and the second definition corresponds to our software refinement phase.
Figure 1. Differences in responses of 8 cameras. Left image: 3D RGB color space plot. Each colored sphere represents the position of a camera sample in the RGB color space. Each connected cluster of colored spheres corresponds to one of the 24 samples in a ColorChecker™ chart. The size of each sphere is proportional to the intra-sample variance. The small white spheres at the origin of each cluster represent the position in the RGB color space of the corresponding target color samples. Right 3 images: the measured color values for each channel, camera and sample, plotted with respect to the corresponding target values. Each individual curve represents samples taken from a particular camera.

2. Previous Work

Previous research aimed at color consistency falls mainly into two categories: calibrating cameras in order to obtain some desired response, and processing images after acquisition. Color consistency has also been studied in the context of projector displays [11], but these techniques have not been extended to camera systems. Other well-known calibration techniques for printers, scanners and monitors are described in great detail in [9].

Calibrating cameras is usually performed with respect to a known target, such as a color chart with standardized samples [13]. Color charts have been traditionally used in photography and color research [2]. The closest work to our method is presented in [8]. They acquire images of a color target, compensate for non-uniform lighting, adjust the gains and offsets of each color channel to calibrate each camera to a linear response, and then apply several software post-processing steps. They also address the scalability of calibrating a large number of cameras by automatically detecting the location of the color target and using special hardware attached to each camera in order to minimize traffic over the camera connections. Although their calibration method is different, their other contributions are applicable to our method as well. We use an approach that minimizes the differences between several camera images while also observing goals such as maintaining visual fidelity and minimizing the signal noise.

Other researchers have proposed the use of scene statistics for single-camera calibration [6]. Scene statistics are used in the RingCam [15], a system for capturing panoramas using multiple cameras. They change the brightness and gain of each camera to match the desired "black level" and "mean brightness" values, and to match the image colors in the overlapping regions of adjacent cameras. While these methods have the advantage that they do not require a color chart, they are sensitive to the choices of desired values.

Consistency can also be obtained by software post-processing of images. For example, [19] uses pair-wise correlation for modeling transfer functions and a special distance metric based on image color histograms. While this can produce reasonable results, its complexity increases quadratically with the number of cameras. Also, the transfer functions computed by this approach may introduce distortions and quantization errors when some parts of the color spectrum are compressed or stretched.

3. The Calibration Process

Our method consists of two main phases: an iterative closed-loop hardware calibration phase, and a software refinement phase. In the first phase we search for the per-camera hardware register settings that best balance linearity and dynamic range. We do this in two steps: first we optimize to a known target (a 24-sample GretagMacbeth [5] ColorChecker™), and then we optimize to the average of the results of the previous step. In the second phase we compute the per-camera parametric values for an additional software-based color mapping. These two phases and the intra-phase steps are depicted in Algorithm 1, and described in more detail in the following subsections.

Algorithm 1 Overall process.
  Phase 1: Closed-Loop Calibration of Hardware
    identify locations of color samples in the target image
    Step 1: optimize to target
      for each camera do
        identify locations of color samples in the camera image
        repeat
          minimize cost function with respect to target
        until (cost < threshold) or (no improvement)
      end for
    Step 2: optimize to average
      repeat
        compute average of all camera images
        designate average as the new target image
        identify locations of color samples in the target image
        for each camera do
          if cost is higher than a threshold then
            minimize cost function with respect to target
          end if
        end for
      until for all cameras (cost < threshold) or (no improvement)
  Phase 2: Software-Based Refinement
    for each camera do
      perform software refinement
    end for

The cost function being minimized is

    C = \sum_{s=1}^{N_S} \big( w \, |\vec{I}_s - \vec{T}_s| + (1 - w) \, V_s \big)    (1)

where C is the value of the cost function, s is the sample number, N_S is the total number of samples, \vec{I}_s is the color of camera image sample s, \vec{T}_s is the color of target image sample s, and w and (1 - w) are weights (we use w = 0.5). Note that colors are 3-element vectors, containing the 3 values for the red, green and blue channels: e.g., \vec{I}_s = [I_{rs} I_{gs} I_{bs}].

We use square sampling windows of adjustable size, and compute the sample color as an average in each color channel. The intra-window sample variance V_s is computed as

    V_s = \sqrt{ \sum_{i=1}^{WS} |\vec{I}_{si} - \vec{I}_s|^2 }    (2)

where i is the index of each pixel inside the sampling window, WS is the window size, \vec{I}_{si} is the color of pixel i of sample s, and \vec{I}_s is the average pixel color over the window.

During the first step each camera will converge on some minimum cost, but the colors in the final camera images are typically still quite different from the target colors.
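As a concrete illustration, Equations (1) and (2) translate directly into code. The sketch below is ours, not the authors' implementation: it assumes each sample's pixels are gathered into a (WS, 3) array, takes |·| as the Euclidean norm (the paper does not specify the norm), and implements the variance term exactly as written in Eq. (2), i.e., without normalization by the window size.

```python
import numpy as np

def sample_variance(window):
    """Eq. (2): square root of the summed squared distances between each
    pixel color in a sampling window and the window's mean color.
    window: (WS, 3) array of RGB pixel colors for one sample."""
    mean = window.mean(axis=0)  # average pixel color I_s over the window
    return np.sqrt(np.sum(np.linalg.norm(window - mean, axis=1) ** 2))

def calibration_cost(windows, targets, w=0.5):
    """Eq. (1): weighted sum, over all N_S samples, of the color error
    |I_s - T_s| and the intra-window variance V_s.
    windows: list of (WS, 3) arrays, one per color sample
    targets: (N_S, 3) array of target sample colors"""
    cost = 0.0
    for window, target in zip(windows, targets):
        color = window.mean(axis=0)             # per-channel average, Sec. 3
        error = np.linalg.norm(color - target)  # |I_s - T_s|, Euclidean
        cost += w * error + (1.0 - w) * sample_variance(window)
    return cost
```

With noise-free windows whose averages exactly match the target colors, both terms vanish and the cost is zero; any residual color error or intra-window noise raises it, which is what the closed-loop search in Algorithm 1 drives down.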
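The closed-loop structure of Step 1 in Algorithm 1 can be sketched as follows. The camera interface (set_settings, capture) and the simple coordinate search over register values are hypothetical placeholders, since the excerpt specifies only the loop's termination condition (cost below a threshold, or no further improvement), not the search method or camera API.

```python
def optimize_to_target(camera, cost_fn, settings, threshold, max_iters=50):
    """Skeleton of Algorithm 1, Phase 1, Step 1: repeatedly adjust hardware
    settings to reduce the cost against the target, stopping when the cost
    falls below a threshold or stops improving.

    camera:   object with set_settings(dict) and capture() (hypothetical API)
    cost_fn:  maps a captured image to the Eq. (1) cost
    settings: dict of register name -> initial value
    """
    camera.set_settings(settings)
    best_cost = cost_fn(camera.capture())
    for _ in range(max_iters):
        improved = False
        for name in settings:              # naive coordinate search (assumed)
            for delta in (+1, -1):
                trial = dict(settings, **{name: settings[name] + delta})
                camera.set_settings(trial)
                cost = cost_fn(camera.capture())
                if cost < best_cost:       # keep any improving register move
                    best_cost, settings = cost, trial
                    improved = True
        if best_cost < threshold or not improved:
            break                          # the until-clause of Algorithm 1
    camera.set_settings(settings)
    return settings, best_cost
```

Step 2 then reuses the same inner loop, but recomputes the target as the average of all camera images after each pass, so the cameras converge toward each other rather than toward the fixed chart values.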