

Device-Independent Representation of Photometric Properties of a Camera

Anonymous CVPR submission

Paper ID ****

Abstract

In this paper we propose a compact device-independent representation of the photometric properties of a camera, especially the vignetting effect. This representation can be computed by recovering the photometric parameters at sparsely sampled device configurations (defined by aperture, focal length and so on). More importantly, it can then be used for accurate prediction of the photometric parameters at new device configurations where they have not been measured.

1. Introduction

In this paper we present a device-independent representation of the photometric properties of a camera, in particular the vignetting effect. The photometric properties of a camera consist of the intensity transfer function, commonly called the response curve, and the spatial intensity variation, marked by a characteristic intensity fall-off from the center to the fringe, commonly called the vignetting effect.

Cameras are common today in many applications including scene reconstruction, surveillance, object recognition and target detection. They are also used extensively in many projector-camera applications like scene capture, 3D reconstruction, virtual reality, tiled displays, and so on. In a research and development environment the same camera may be used for several different applications and different settings. But for each particular scenario, we need to calibrate this camera anew, even though properties like the vignetting effect or the response curve depend only on device parameters and change very slowly, if at all, with time. Photometric calibrations are often time consuming and may need specific patterns and conditions which are hard to generate (e.g. a diffuse object illuminated in a diffuse manner).

We design a compact device-independent representation of the non-parametric photometric parameters in a space spanned by device settings like aperture and focal length. We propose a higher-dimensional Bezier patch for our device-independent representation, based on the critical observation that the photometric parameters of a camera change smoothly across different device configurations. The primary advantages of this representation are the following: (a) generation of an accurate representation using the measured photometric properties at a few device configurations obtained by sparsely sampling the device settings space; (b) accurate prediction of the photometric parameters from this representation at unsampled device configurations. To the best of our knowledge, we present the first device-independent representation of the photometric properties of a camera which has the capability of predicting device parameters at configurations where they were not measured.

Our device-independent representation opens up the opportunity of factory calibration of the camera, with the data then being stored in the camera itself. Our compact representation requires very small storage. Further, the predictive nature of our model assures that properties at many different settings can be accurately generated from this representation. Finally, our Bezier based representation can be easily computed using inexpensive embedded hardware, allowing on-the-fly generation of the photometric properties at different device settings, which can then be used to correct the imagery in real-time.

2. Related Work

There are many methods that use a parametric model to represent and estimate the camera response curve [16, 18, 7, 25, 20, 10, 11, 6]. [4, 15] use a non-parametric model for estimating the response curve. Other methods estimate the vignetting effect of a camera when the response curve is known, using a parametric model [1, 29, 28, 26, 2, 23, 3, 5, 30]. The problem of estimating an unknown response curve and the vignetting effect of the camera is an underconstrained one. [12, 13, 9] estimate both the response curve and the vignetting effect simultaneously using parametric models, and can recover the vignetting effect only up to an exponential ambiguity.

The changes in the photometric properties of a camera depend on a large number of unforeseen physical parameters, especially in today's inexpensive, highly miniaturized devices which often undergo adverse optical aberrations due to low quality material properties.


Figure 1. The camera vignetting effect estimated at (a) f/16, (b) f/8, (c) f/4 and (d) f/2.8. Note that the vignetting effect gets more pronounced as the aperture size increases from (b) to (d).

Hence, almost all of the models for the photometric properties mentioned above are device-dependent, physically-based models of the response curve and the vignetting effect. Accounting for all these physical influences accurately is very difficult in a device-dependent model. Hence, many simplifying assumptions are made (like the principal center coinciding with the image center, or the vignetting being a polynomial function of the radius), which are often not true physically.

Contrary to these models, we seek a device-independent representation defined in a multi-dimensional space spanned by the several controllable device settings like aperture and focal length. Hence, our model precludes any such assumptions common in device-dependent models. Since it depends only on the device settings, it is easily scalable to a large number of device settings. Our method can easily be applied when the estimated parameters are in non-parametric form, as in [4, 15]. More recently, there have been works which constrain the under-constrained problem of finding the response curve and vignetting effect of a camera by introducing a projector in a closed feedback loop where the projected input images seen by the camera are known [8, 24]. These can thus estimate the response curves and vignetting effects of both the camera and the projector simultaneously. These methods use non-parametric representations of the photometric properties particularly suited for our representation. For methods that use a parametric representation, an intermediate non-parametric representation can be easily generated to make it conducive to our device-independent representation.

3. Device-Independent Representation

In this section we present a new and efficient device-independent representation of the photometric properties of a camera.

We seek a device-independent representation with the following goals. First, the representation should be defined in a multi-dimensional space spanned by the controllable device settings like aperture and focal length. We call this space the device settings space. Second, the representation should be compact. Third, it should be easy to compute the parameters of the representation by estimating the photometric properties (response curve and vignetting effect) at sparsely sampled points in the device settings space.


Finally, the photometric parameters at new points in the device settings space (where they were not measured) should be predicted easily and accurately from this compact representation.

Our representation of photometric camera properties and its efficient one-time evaluation can have a significant impact on the performance of projector-camera or capture systems. Currently, the cameras used in such systems are not corrected for their vignetting effect due to difficult calibration processes. So, usually only a small central part of the image is considered to avoid whatever vignetting effect is present. To avoid this inefficiency in making use of the whole resolution of the camera, more often the camera is set to a low aperture (f/16 or lower) [4] where the vignetting is considered insignificant. But this results in noisy images due to low light. Allowing more light without changing the aperture setting is only possible via a longer exposure time (a slower shutter speed). But, in this case, the camera cannot tolerate movement, leading to motion-blurred images. Our method can now enable the use of factory-calibrated cameras in any setting appropriate for the demands made by the application.

Of the photometric properties, the response curve is spatially invariant and the vignetting effect is input invariant. Usually the vignetting changes from one device setting to another, while the response curve remains the same [4, 14]. So, in our discussion, we only consider the vignetting effect, assuming a compact polynomial representation for the 1D response curve. First, we describe our method for two controllable device settings of our cameras, focal length and aperture. This is followed by a generalization of the scheme to allow more device settings. Figure 1 provides an example of the input to our method: the vignetting effect of the camera at different aperture settings.

We desire to represent the vignetting effect of the camera as a function C(u, v, f', a') that returns the vignetting effect at spatial coordinate (u, v) for device settings parameters of focal length, f, and aperture, a. We call the 2D space spanned by f and a the device settings space. f' and a' are functions of f and a and provide a parametrization of the device settings space most suited for finding C. This will be explained in detail in Section 3.1.

Our representation for the vignetting effect C is based on the critical observation that it changes smoothly and coherently with change in device settings and with spatial coordinates. Hence, we choose a 4-dimensional Bezier surface to represent it compactly. We define the Bezier surface C(u, v, f', a') as

    C(u, v, f', a') = \sum_{i=0}^{N_u} \sum_{j=0}^{N_v} \sum_{k=0}^{N_f} \sum_{l=0}^{N_a} B_i(u) B_j(v) B_k(f') B_l(a') P_{i,j,k,l}
                    = \sum_{i=0}^{N_u} \sum_{j=0}^{N_v} \sum_{k=0}^{N_f} \sum_{l=0}^{N_a} B_{i,j,k,l}(u, v, f', a') P_{i,j,k,l}

where u, v, f' and a' are normalized to [0, 1] and P_{i,j,k,l} are the control points of the Bezier. The B_i's are the Bernstein polynomials described by:

    B_i(u) = \binom{N_u}{i} u^i (1 - u)^{N_u - i}                                  (1)

Note that in the above equations, N_u, N_v, N_f and N_a can be different. This assures that the smoothness of the Bezier can be different in the domains of u, v, f' and a'. For example, in a very high-end camera, if the vignetting is very limited, it may suffice to have N_u = N_v = 3, but it may change over the aperture setting to result in N_a = 4.

Our method involves four steps. (a) First, the device settings space spanned by f and a is sparsely sampled. The self-calibration method mentioned in the previous section is used to estimate the vignetting effect at these sparsely sampled device settings. (b) Second, the device parameters f and a are parameterized to [0, 1] to yield the parameters f' and a' respectively that will be used to parameterize the fitting of the Bezier. (c) Third, a four-dimensional Bezier surface C(u, v, f', a') is fitted to this data and the control points P_{i,j,k,l} are estimated from the fitted function. (d) Finally, the vignetting effect at an unknown device configuration is predicted by evaluating the fitted C(u, v, f', a') at the new device settings.

3.1. Parametrization

The parametrization of the device settings parameters f and a to the normalized parameters f' and a', which range between 0 and 1, is critical to achieve a good fit for C and is described in detail in this section.

Let us assume that the device settings parameters f and a are sampled at F and A values respectively. These are f_1, f_2, ..., f_F and a_1, a_2, ..., a_A respectively. Thus, the total number of sampled device configurations in the device settings space is given by F x A. We estimate the vignetting effect using the self-calibration technique at each of the F x A device configurations.

We first consider iso-parametric curves for f, i.e. curves where a is constant. Note that there will be A such iso-parametric curves. Let us consider one such iso-parametric curve where a = a_1. The distance between adjacent normalized parametric values at (f_i, a_1) and (f_{i+1}, a_1) on this curve is assigned in such a way that it is proportional to the distance between the two-dimensional surfaces representing the vignetting effect at these two device settings (f_i, a_1) and (f_{i+1}, a_1), estimated using the self-calibration technique. For measuring the distance between two surfaces we use the L-infinity distance (maximum distance) between them. Note that this parametrization is a non-linear, monotonically increasing function of both f and a. We denote it by f' = p_f(f, a) and a' = p_a(f, a). Further, a uniform sampling of the device settings space spanned by f and a does not assure a uniform sampling of the normalized parameter space spanned by f' and a'.
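To make this parametrization concrete, the following sketch (Python with NumPy; the function name and the representation of each estimated vignetting surface as a 2D array are our own assumptions, not code from the paper) assigns normalized parameter values along one iso-parametric curve, spacing consecutive samples proportionally to the L-infinity distance between their estimated vignetting surfaces.

    import numpy as np

    def normalized_parameters_along_curve(vignetting_surfaces):
        """Assign normalized parameter values along one iso-parametric curve.

        vignetting_surfaces: list of 2D arrays, the vignetting effect estimated by
        self-calibration at (f_1, a_1), (f_2, a_1), ..., (f_F, a_1).
        Consecutive parameter values are spaced proportionally to the L-infinity
        distance between the corresponding surfaces; the first sample maps to 0
        and the last to 1.
        """
        distances = [float(np.max(np.abs(s1 - s0)))
                     for s0, s1 in zip(vignetting_surfaces[:-1], vignetting_surfaces[1:])]
        cumulative = np.concatenate(([0.0], np.cumsum(distances)))
        return cumulative / cumulative[-1]

The same routine applied along the iso-parametric curves of a yields the normalized aperture parameters, giving the chord-length-style mapping f' = p_f(f, a), a' = p_a(f, a) described above.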


[Figure 2: two plots over the device settings space (x-axis: aperture, f/28 to f/2.8; y-axis: focal length, 25 to 75 mm). Panel (a): the sampled device settings with the percentage error (mean, std) of the fitted Bezier from the estimated vignetting effect at the sampled points. Panel (b): verification of the predicted results using the percentage error (mean, std) of the predicted vignetting from the estimated vignetting effect.]

Figure 2. (a) This plot shows the sampling of the device settings space used for recovering the Bezier function and the corresponding mean and std error of the fitted vignetting from the estimated vignetting at the sampled settings. (b) This plot shows the device parameters at which we verified our prediction method and the corresponding mean and std errors of the predicted vignetting from the estimated vignetting at these settings.

Figure 3. Comparison of vignetting correction: (a) uncorrected image captured at f/3.3, (b) corrected image using the actual vignetting effect from calibration, (c) corrected image using the predicted vignetting effect.

After f and a are parameterized to f' and a' as above, we fit a four-dimensional Bezier using linear least squares to estimate C(u, v, f', a') and the control points P_{i,j,k,l} thereof.

3.2. Prediction

Once the function C(u, v, f', a') is estimated, we use it to predict the vignetting at a new device configuration (f_new, a_new). The most critical aspect of this prediction is the parametrization of these new device settings (f_new, a_new) to (f'_new, a'_new) that will yield accurate prediction from the fitted C(u, v, f', a'). This in turn depends on how the functions p_f and p_a are approximated. We tried both global and local approximation techniques with similar results. These are summarized in the next section.

3.3. Experiments

We have tested our representation on two cameras: a high-resolution (3.4-13.5 megapixel) expensive camera like the Kodak DCS and a low-resolution (VGA) Point Grey camera, with similar results. We found that a surface of at least degree 4 is needed in both cases. Hence, N_u = N_v = N_f = N_a = 4.

To reconstruct the Bezier function C(u, v, f', a'), we used five samples of the aperture setting at f/22, f/11, f/5.6, f/3.3 and f/2.8 (A = 5) and four samples of the focal length at 28mm, 35mm, 62mm and 75mm (F = 4). Thus, we estimate the vignetting effect using our self-calibration procedure at F x A = 20 device configurations. These sampled device settings are illustrated in Figure 2(a), demonstrating that estimation of the vignetting effect is required only at a few sparse samples in the device settings space. We assume the vignetting effect at a smaller aperture, say f/32.0, is negligible and include a flat surface at this aperture for our data fitting.
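Given the vignetting surfaces estimated at these sampled configurations and the normalized parameters of Section 3.1, the control points P_{i,j,k,l} can be recovered by linear least squares, as stated above. The sketch below (Python/NumPy; the sample layout and function names are illustrative assumptions, not the authors' code) builds the Bernstein design matrix from Equation (1) and solves for the control points.

    import numpy as np
    from math import comb
    from itertools import product

    def bernstein(n, i, t):
        # B_i(t) = C(n, i) * t^i * (1 - t)^(n - i), as in Equation (1)
        return comb(n, i) * t**i * (1.0 - t)**(n - i)

    def fit_control_points(samples, degrees=(4, 4, 4, 4)):
        """Fit the control points P_{i,j,k,l} by linear least squares.

        samples: list of (u, v, f_prime, a_prime, measured_vignetting) tuples,
                 e.g. one tuple per (possibly downsampled) pixel of every
                 estimated vignetting surface, all coordinates in [0, 1].
        degrees: (N_u, N_v, N_f, N_a).
        Returns an array of shape (N_u+1, N_v+1, N_f+1, N_a+1).
        """
        Nu, Nv, Nf, Na = degrees
        index_sets = list(product(range(Nu + 1), range(Nv + 1),
                                  range(Nf + 1), range(Na + 1)))
        A = np.empty((len(samples), len(index_sets)))
        b = np.empty(len(samples))
        for r, (u, v, fp, ap, value) in enumerate(samples):
            b[r] = value
            for c, (i, j, k, l) in enumerate(index_sets):
                # B_{i,j,k,l}(u, v, f', a') = B_i(u) B_j(v) B_k(f') B_l(a')
                A[r, c] = (bernstein(Nu, i, u) * bernstein(Nv, j, v) *
                           bernstein(Nf, k, fp) * bernstein(Na, l, ap))
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return coeffs.reshape(Nu + 1, Nv + 1, Nf + 1, Na + 1)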


Table 1. Percentage error of the fitted four-dimensional Bezier vignetting function of degree 4 from the vignetting effect estimated using the self-calibration method.

    Focal length   f/Stop   Mean    STD
    28mm           f/2.8    1.27%   1.23%
    28mm           f/3.3    1.06%   0.97%
    28mm           f/5.6    0.66%   0.58%
    28mm           f/11.0   0.68%   0.53%
    28mm           f/22.0   0.43%   0.41%
    35mm           f/2.8    1.45%   1.83%
    35mm           f/3.3    1.53%   0.99%
    35mm           f/5.6    2.88%   1.75%
    35mm           f/11.0   1.63%   1.06%
    35mm           f/22.0   0.56%   0.47%
    62mm           f/2.8    0.88%   1.13%
    62mm           f/3.3    1.22%   0.90%
    62mm           f/5.6    1.42%   1.10%
    62mm           f/11.0   0.36%   0.26%
    62mm           f/22.0   0.34%   0.30%
    75mm           f/2.8    1.25%   1.08%
    75mm           f/3.3    1.61%   0.65%
    75mm           f/5.6    0.35%   0.48%
    75mm           f/11.0   0.24%   0.21%
    75mm           f/22.0   0.31%   0.24%
    Average                 0.84%   0.72%

Table 1 presents the percentage error of our fitted data from the estimated vignetting effects at these sampled device settings, which are also plotted in Figure 2(a). Our average mean error is less than 1%, with the average standard deviation also less than 1%. The low mean and standard deviation of this error justify our use of a Bezier representation.

Next, we use the recovered C(u, v, f', a') to predict the vignetting effect at several new device settings. These device settings are shown in Figure 2(b). To parameterize the new device settings to the new normalized parameters, we interpolate the approximation of p_f and p_a using two different techniques.

Global fitting: In the first case, we use the computed (f', a') at all device settings where the photometric parameters are measured, i.e. {(f, a) | f in {f_1, f_2, ..., f_F}, a in {a_1, a_2, ..., a_A}} (see Figure 2(a)), to approximate p_f and p_a with a globally fitted 2D Bezier function, as illustrated in Figure 4. A Bezier of degree 3 was sufficient for this purpose. We then evaluate this fitted function at the new device parameters (f_new, a_new) to generate the new normalized parameters (f'_new, a'_new).

Local fitting: Here we use a local cubic interpolation of the (f', a') to generate the new normalized parameters (f'_new, a'_new). Thus, in this case the functions p_f and p_a are approximated by piecewise cubic patches.
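A minimal sketch of this prediction step follows (Python; SciPy's piecewise-cubic griddata interpolation stands in for the local cubic interpolation of p_f and p_a, and the function and variable names are our own assumptions). The new device settings are first mapped to normalized parameters, and the fitted Bezier is then evaluated over the image grid.

    import numpy as np
    from math import comb
    from scipy.interpolate import griddata

    def bernstein(n, i, t):
        return comb(n, i) * t**i * (1.0 - t)**(n - i)

    def evaluate_bezier(P, u, v, fp, ap):
        # C(u, v, f', a') = sum_{i,j,k,l} B_i(u) B_j(v) B_k(f') B_l(a') P[i,j,k,l]
        Nu, Nv, Nf, Na = (d - 1 for d in P.shape)
        return sum(bernstein(Nu, i, u) * bernstein(Nv, j, v) *
                   bernstein(Nf, k, fp) * bernstein(Na, l, ap) * P[i, j, k, l]
                   for i in range(Nu + 1) for j in range(Nv + 1)
                   for k in range(Nf + 1) for l in range(Na + 1))

    def predict_vignetting(P, sampled_fa, sampled_fp_ap, f_new, a_new, width, height):
        """Predict the vignetting surface at an unsampled configuration (f_new, a_new).

        sampled_fa:    (F*A, 2) array of the sampled device settings (f, a).
        sampled_fp_ap: (F*A, 2) array of their normalized parameters (f', a').
        P:             control points of the fitted 4D Bezier.
        """
        # "Local fitting": piecewise-cubic interpolation of the normalized
        # parameters (returns NaN outside the convex hull of the samples).
        query = np.array([[f_new, a_new]])
        fp_new = float(griddata(sampled_fa, sampled_fp_ap[:, 0], query, method='cubic')[0])
        ap_new = float(griddata(sampled_fa, sampled_fp_ap[:, 1], query, method='cubic')[0])
        # Evaluate the fitted Bezier per pixel (vectorizing this loop is
        # straightforward but omitted for clarity).
        vignetting = np.empty((height, width))
        for y in range(height):
            for x in range(width):
                u, v = x / (width - 1), y / (height - 1)
                vignetting[y, x] = evaluate_bezier(P, u, v, fp_new, ap_new)
        return vignetting

Replacing the griddata call with the evaluation of a globally fitted 2D Bezier over (f, a) would give the "global fitting" variant described above.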


To verify the accuracy of our prediction, we estimate the vignetting at the same device settings and compare it with the predicted vignetting. To recover the photometric properties we use the method presented by [8] for self-calibrating a projector-camera pair. Tables 2 and 3 present the percentage error of our predicted vignetting from the estimated one for the global and local fitting respectively. The results are comparable, and both show an average mean error of less than 2% and an average standard deviation from the mean of less than 1.5%. The errors of Table 2 are also plotted with the associated device settings in Figure 2(b). A visual comparison is available in Figure 3, where a captured image is corrected using both the predicted and the estimated vignetting. Note that the difference between the two is visually indistinguishable.

Table 2. Percentage error in predicted vignetting using global fitting for interpolating normalized parameters.

    Focal length   f/Stop   Mean    STD
    28mm           f/4.0    1.19%   0.90%
    28mm           f/8.0    1.75%   1.31%
    28mm           f/16.0   2.26%   1.50%
    35mm           f/4.0    1.74%   1.51%
    35mm           f/8.0    2.46%   1.69%
    35mm           f/16.0   3.02%   2.26%
    50mm           f/2.8    1.56%   1.03%
    50mm           f/3.3    2.04%   2.00%
    50mm           f/4.0    2.13%   1.58%
    50mm           f/5.6    2.32%   1.38%
    50mm           f/8.0    1.26%   0.97%
    50mm           f/11.0   1.15%   0.89%
    50mm           f/16.0   1.94%   1.27%
    50mm           f/22.0   1.61%   1.15%
    62mm           f/4.0    1.57%   1.43%
    62mm           f/8.0    1.21%   0.81%
    62mm           f/16.0   1.17%   0.99%
    75mm           f/4.0    1.10%   0.84%
    75mm           f/8.0    5.62%   2.89%
    75mm           f/16.0   1.60%   0.62%
    Average                 1.96%   1.35%

Finally, a projector is a dual of a camera [22]. Hence we anticipate our representation, parametrization and prediction to work equally well for the photometric properties of a projector. However, it is currently difficult to experiment with projectors unless we have digitally controlled zoom and focus capability, only available today in high-end projectors. We believe that the same paradigm can be extended to find a compact predictive representation of the projector vignetting function as the orientation of the projector with respect to the screen changes, or as the distance of the projector with respect to the screen changes. Hence, our device-independent representation can be applied to other projector and camera related problems. In fact, we use this representation to store our projector vignetting functions for multi-projector display applications [?, ?]. However, we do not use it yet for prediction since we do not have a way to digitally control the settings of our inexpensive commodity projectors.

Figure 4. Plots of the functions F_1 (a) and F_2 (b), corresponding to the device parameters aperture and focal length respectively. These are represented using 2D Bezier patches of degree 3.

Table 3. Percentage error in predicted vignetting using local fitting for interpolating normalized parameters.

    Focal length   f/Stop   Mean    STD
    28mm           f/4.0    2.26%   1.13%
    28mm           f/8.0    1.71%   1.15%
    28mm           f/16.0   4.61%   2.84%
    35mm           f/4.0    1.54%   1.10%
    35mm           f/8.0    2.39%   1.43%
    35mm           f/16.0   4.43%   1.92%
    50mm           f/2.8    1.45%   1.06%
    50mm           f/3.3    1.28%   1.28%
    50mm           f/4.0    2.11%   1.43%
    50mm           f/5.6    1.53%   0.86%
    50mm           f/8.0    0.94%   0.82%
    50mm           f/11.0   0.72%   0.56%
    50mm           f/16.0   0.55%   0.65%
    50mm           f/22.0   0.57%   0.44%
    62mm           f/4.0    1.41%   1.06%
    62mm           f/8.0    1.40%   0.93%
    62mm           f/16.0   0.51%   0.31%
    75mm           f/4.0    1.05%   0.83%
    75mm           f/8.0    4.91%   2.73%
    75mm           f/16.0   0.27%   0.21%
    Average                 1.75%   1.10%

3.4. Handling Higher Dimensional Device Settings Spaces

The above representation can be easily generalized to a device settings space that involves more than two parameters. For an M-dimensional device settings space spanned by M device parameters d_1, d_2, ..., d_M, the representation will be an (M + 2)-dimensional Bezier given by C(u, v, d'_1, d'_2, ..., d'_M), where d'_k = p_{d_k}(d_1, d_2, ..., d_M), 1 <= k <= M. Also, in this general case, during prediction using global fitting, the d'_k will be approximated using an M-dimensional Bezier function.

3.5. Discussion

The choice of a Bezier surface was motivated by the goal of a general representation. First, Bezier surfaces provide the ability to represent a large number of smooth variations of several orders and dimensions. Thus they are flexible. By changing just a single parameter, the degree of the Bezier surface, one can capture the different rates of change in the smoothness of the surface in different devices. Smooth surfaces have a lower degree while surfaces that are not as smooth have a higher degree. Also, as mentioned earlier, the smoothness of the photometric properties can vary differently with different device settings (different parameters of the Bezier) and this can also be easily captured by changing the degree of the constituting Bernstein polynomials.

Second, Bezier surfaces provide a very compact representation. For example, in a 6 megapixel camera where the captured images are 3000 x 2000 pixels, to store the vignetting effect for 100 apertures in a non-parametric representation, one would need to store 100 x 3000 x 2000 = 600,000,000 floating point numbers. If each floating point number occupies 4 bytes, this is about 2.4 GB. If we resort to an existing parametric notation of polynomial representation (degree 4), this would need storing 5 x 100 = 500 floating point numbers. But a quartic (degree 4) Bezier surface representation would represent this with only 125 floating point numbers. The compactness is exploited more with the increase in the number of device parameters. This compact representation opens up the possibility of one-time calibration (maybe in the factory prior to the sale of the product) and compact storage of photometric properties in the camera itself, maybe in the form of a look-up table (LUT). Further, one can even imagine storing these properties in the lens itself. Currently, lenses use a small amount of memory to store focal length, aperture and other settings. Our representation, being small, can be accommodated even in such limited storage.

Finally, the prediction and data-fitting calculations related to a Bezier surface are relatively simple and can be easily embedded in hardware. The Bezier parameters can be embedded as metadata to the image along with other common parameters like focal length, white balance and so on. The embedded hardware can then automatically correct for the artifacts on-the-fly, or the metadata can be used for software postprocessing.
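As an illustration of the correction step mentioned above, the following sketch (Python/NumPy) applies a generic division-by-attenuation vignetting correction; it is not code from the paper, and it assumes the image has already been linearized with the response curve, with intensities normalized to [0, 1] and the predicted vignetting surface normalized so that its maximum is 1.

    import numpy as np

    def correct_vignetting(image, vignetting, eps=1e-6):
        """Divide out the predicted vignetting attenuation from a captured image.

        image:      float array of shape (H, W) or (H, W, 3), linearized.
        vignetting: float array of shape (H, W) from the prediction step,
                    normalized so its maximum (typically the center) is 1.0.
        """
        v = np.clip(vignetting, eps, None)      # guard against division by zero
        if image.ndim == 3:                     # apply the same gain to every channel
            v = v[:, :, np.newaxis]
        return np.clip(image / v, 0.0, 1.0)     # assumes intensities in [0, 1]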


4. Conclusion

In conclusion, we have presented a simple device-independent representation of the photometric properties of a camera. This opens up the potential of embedding these parameters, and a process to correct images using them appropriately, within the camera hardware itself.

References

[1] N. Asada, A. Amano, and M. Baba. Photometric calibration of zoom lens systems. Proceedings of the 13th International Conference on Pattern Recognition, 1, 1996.
[2] C. M. Bastuscheck. Correction of video camera response using digital techniques. Journal of Optical Engineering, 26:1257-1262, 1987.
[3] Y. P. Chen and B. Mudunuri. An anti-vignetting technique for superwide field of view mosaicked images. Journal of Imaging Technology, 12:293-295, 1986.
[4] P. E. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. Proceedings of ACM Siggraph, pages 369-378, 1997.
[5] D. Goldman and J. Chen. Vignette and exposure calibration and compensation. Proceedings of IEEE International Conference on Computer Vision, pages 899-906, 2005.
[6] M. Grossberg and S. Nayar. Determining the camera response from images: What is knowable? IEEE Transactions on Pattern Analysis and Machine Intelligence, 25:1455-1467, 2003.
[7] M. Grossberg and S. Nayar. Modeling the space of camera response functions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26:1272-1282, 2004.
[8] R. Juang and A. Majumder. Photometric self-calibration of a projector-camera system. IEEE CVPR Workshop on Projector Camera Systems, 2007.
[9] S. Kim and M. Pollefeys. Radiometric self-alignment of image sequences. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2004.
[10] S. Lin, J. Gu, S. Yamazaki, and H. Shum. Radiometric calibration from a single image. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 938-945, 2004.
[11] S. Lin and L. Zhang. Determining the radiometric response function from a single grayscale image. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 66-73, 2005.
[12] A. Litvinov and Y. Schechner. Addressing radiometric nonidealities: A unified framework. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 52-59, 2005.
[13] A. Litvinov and Y. Schechner. Radiometric framework for image mosaicking. Journal of the Optical Society of America, 22:839-848, 2005.
[14] A. Majumder and R. Stevens. Color nonuniformity in projection-based displays: Analysis and solutions. IEEE Transactions on Visualization and Computer Graphics, 10(2), March-April 2003.
[15] S. Mann and R. Mann. Quantigraphic imaging: Estimating the camera response and exposures from differently exposed images. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 842-849, 2001.
[16] S. Mann and R. Picard. On being undigital with digital cameras: Extending dynamic range by combining differently exposed pictures. Proc. IS&T 46th Annual Conference, pages 422-428, 1995.
[17] V. Masselus, P. Peers, P. Dutre, and Y. D. Willems. Relighting with 4D incident light fields. Proceedings of ACM SIGGRAPH, 2003.
[18] T. Mitsunaga and S. Nayar. Radiometric self calibration. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1999.
[19] S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar. Fast separation of direct and global components of a scene using high frequency illumination. ACM Transactions on Graphics (SIGGRAPH), 25(3), 2003.
[20] C. Pal, R. Szeliski, M. Uyttendaele, and N. Jojic. Probability models for high dynamic range imaging. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 173-180, 2004.
[21] C. Pinhanez, M. Podlaseck, R. Kjeldsen, A. Levas, G. Pingali, and N. Sukaviriya. Ubiquitous interactive displays in a retail environment. Proceedings of SIGGRAPH Sketches, 2003.
[22] R. Raskar, G. Welch, M. Cutts, A. Lake, L. Stesin, and H. Fuchs. The office of the future: A unified approach to image based modeling and spatially immersive display. Proceedings of ACM Siggraph, pages 168-176, 1998.
[23] A. A. Sawchuk. Real-time correction of intensity nonlinearities in imaging systems. IEEE Transactions on Computers, 26:34-39, 1977.


[24] P. Song and T. J. Cham. A theory for photometric self calibration of multiple overlapping projectors and cameras. IEEE CVPR Workshop on Projector Camera Systems, 2005.
[25] Y. Tsin, V. Ramesh, and T. Kanade. Statistical calibration of the CCD imaging process. Proc. IEEE International Conference on Computer Vision, pages 480-487, 2001.
[26] M. Uyttendaele, A. Criminisi, S. B. Kang, S. Winder, R. Hartley, and R. Szeliski. Image-based interactive exploration of real-world environments. IEEE Computer Graphics and Applications, 24:52-63, 2004.
[27] R. Yang, A. Majumder, and M. Brown. Camera based calibration techniques for seamless multi-projector displays. IEEE Transactions on Visualization and Computer Graphics, 11(2), March-April 2005.
[28] W. Yu. Practical anti-vignetting methods for digital cameras. IEEE Transactions on Consumer Electronics, 50:975-983, 2004.
[29] W. Yu, Y. Chung, and J. Soh. Vignetting distortion correction method for high quality digital imaging. Proceedings IEEE International Conference on Pattern Recognition, pages 666-669, 2004.
[30] Y. Zheng, S. Lin, and S. B. Kang. Single-image vignetting correction. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 461-468, 2006.
