Device-Independent Representation of Photometric Properties of a Camera
CVPR 2010 Submission #****. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.

Anonymous CVPR submission

Paper ID ****

Abstract

In this paper we propose a compact device-independent representation of the photometric properties of a camera, especially the vignetting effect. This representation can be computed by recovering the photometric parameters at sparsely sampled device configurations (defined by aperture, focal length, and so on). More importantly, it can then be used for accurate prediction of the photometric parameters at new device configurations where they have not been measured.

1. Introduction

In this paper we present a device-independent representation of the photometric properties of a camera, in particular the vignetting effect. The photometric properties of a camera consist of the intensity transfer function, commonly called the response curve, and the spatial intensity variation, marked by a characteristic intensity fall-off from the center to the fringe, commonly called the vignetting effect.

Cameras are common today in many applications including scene reconstruction, surveillance, object recognition, and target detection. They are also used extensively in many projector-camera applications like scene capture, 3D reconstruction, virtual reality, tiled displays, and so on. In a research and development environment the same camera may be used for several different applications and different settings. But for each particular scenario, we need to calibrate this camera anew, even though properties like the vignetting effect or the response curve depend only on device parameters and change very slowly, if at all, with time. Photometric calibrations are often time consuming and may need specific patterns and conditions that are hard to generate (e.g., a diffuse object illuminated in a diffuse manner).

We design a compact device-independent representation of the non-parametric photometric parameters in a space spanned by device settings like aperture and focal length. We propose a higher-dimensional Bézier patch for our device-independent representation, based on the critical observation that the photometric parameters of a camera change smoothly across different device configurations. The primary advantages of this representation are the following: (a) generation of an accurate representation using the measured photometric properties at a few device configurations obtained by sparsely sampling the device settings space; (b) accurate prediction of the photometric parameters from this representation at unsampled device configurations. To the best of our knowledge, we present the first device-independent representation of the photometric properties of cameras that is capable of predicting those properties at configurations where they were not measured.

Our device-independent representation opens up the opportunity of factory calibration of the camera, with the calibration data then being stored in the camera itself. Our compact representation requires very little storage. Further, the predictive nature of our model assures that properties at many different settings can be accurately generated from this representation. Finally, our Bézier-based representation can be easily computed using inexpensive embedded hardware, allowing on-the-fly generation of the photometric properties at different device settings, which can then be used to correct the imagery in real time.

2. Related Work

There are many methods that use a parametric model to represent and estimate the camera response curve [16, 18, 7, 25, 20, 10, 11, 6]. [4, 15] use a non-parametric model for estimating the response curve. Other methods estimate the vignetting effect of a camera when the response curve is known, using a parametric model [1, 29, 28, 26, 2, 23, 3, 5, 30]. The problem of estimating an unknown response curve and the vignetting effect of the camera is an underconstrained one. [12, 13, 9] estimate both the response curve and the vignetting effect simultaneously using parametric models, and can recover the vignetting effect only up to an exponential ambiguity.

The changes in the photometric properties of a camera depend on a large number of unforeseen physical parameters, especially in today's inexpensive, highly miniaturized devices, which often undergo adverse optical aberrations due to low-quality material properties. Hence, almost all of the

Figure 1. The camera vignetting effect estimated at (a) f/16, (b) f/8, (c) f/4 and (d) f/2.8. Note that the vignetting effect gets more pronounced as the aperture size increases from (b) to (d).
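The falloff pattern in Figure 1 can be made concrete with a small numerical sketch. The code below is illustrative only: the function names and polynomial coefficients are invented here, and a simple polynomial-in-radius model is exactly the kind of device-dependent assumption the text goes on to question.

```python
import numpy as np

def radial_vignetting(width, height, coeffs=(1.0, -0.3, -0.1)):
    """Evaluate a polynomial-in-radius falloff V(r) = c0 + c1*r^2 + c2*r^4.

    A device-dependent radial model like this serves only to illustrate
    what a vignetting map looks like: brightest at the center, darker
    toward the fringe.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    # Radius squared, normalized so an image corner is at r = 1.
    r2 = ((xs - cx) ** 2 + (ys - cy) ** 2) / (cx ** 2 + cy ** 2)
    c0, c1, c2 = coeffs
    return c0 + c1 * r2 + c2 * r2 ** 2

def correct_vignetting(image, falloff):
    """Undo the falloff by dividing it out (assumes a linearized image)."""
    return image / np.clip(falloff, 1e-6, None)

falloff = radial_vignetting(640, 480)
# A uniformly lit scene, attenuated by the falloff, is restored to flat.
corrected = correct_vignetting(np.ones((480, 640)) * falloff, falloff)
```

Dividing by the falloff map is only valid on linearized intensities, which is why the response curve and the vignetting effect must both be known.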
models for the photometric properties mentioned above are device-dependent, physically based models of the response curve and the vignetting effect. Accounting for all these physical influences accurately is very difficult in a device-dependent model. Hence, many simplifying assumptions are made (e.g., that the principal center coincides with the image center, or that the vignetting is a polynomial function of the radius), which are often not true physically.

Contrary to these models, we seek a device-independent representation defined in a multi-dimensional space spanned by the several controllable device settings like aperture and focal length. Hence, our model precludes any such assumptions common in device-dependent models. Since it depends only on the device settings, it is easily scalable to a large number of device settings. Our method can easily be applied when the estimated parameters are in non-parametric form, as in [4, 15]. More recently, there have been works which constrain the under-constrained problem of finding the response curve and vignetting effect of a camera by introducing a projector in a closed feedback loop, where the projected input images seen by the camera are known [8, 24]. These can thus estimate the response curves and vignetting effects of both the camera and the projector simultaneously. These methods use non-parametric representations of the photometric properties, which are particularly well suited to our representation. For methods that use a parametric representation, an intermediate non-parametric representation can easily be generated to make it conducive to our device-independent representation.

3. Device-Independent Representation

In this section we present a new and efficient device-independent representation of the photometric properties of a camera.

We seek a device-independent representation with the following goals. First, the representation should be defined in a multi-dimensional space spanned by the controllable device settings like aperture and focal length. We call this space the device settings space. Second, the representation should be compact. Third, it should be easy to compute the parameters of the representation by estimating the photometric properties (response curve and vignetting effect) at sparsely sampled points in the device settings space. Finally, the photometric parameters at new points in the device settings space (where they were not measured) should be predicted easily and accurately from this compact representation.

Our representation of photometric camera properties and its efficient one-time evaluation can have a significant impact on the performance of projector-camera or capture systems. Currently, the cameras used in such systems are not corrected for their vignetting effect due to difficult calibration processes. So, usually only a small central part of the image is considered, to avoid whatever vignetting effect is present. To avoid this inefficiency in making use of the whole resolution of the camera, more often the camera is set to a low aperture (f/16 or lower) [4], where the vignetting

where u, v, f' and a' are normalized in [0, 1] and P_{i,j,k,l} are the control points of the Bézier. The B_i's are the Bernstein polynomials described by:

B_i(u) = \binom{N_u}{i} u^i (1 - u)^{N_u - i}    (1)

Note that in the above equations, N_u, N_v, N_f and N_a can be different. This assures that the smoothness of the Bézier can differ in the domains of u, v, f' and a'. For example, in a very high-end camera, if the vignetting is very limited, it may suffice to have N_u = N_v = 3, but it may change over the aperture setting to result in N_a = 4.

Our method involves three steps: (a) First, the device settings space spanned by f and a is sparsely sampled.
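Equation (1) and the surrounding description suggest a tensor-product evaluation over the four normalized domains. The sketch below is a hypothetical rendering of that evaluation, not the authors' code: it assumes the control points P_{i,j,k,l} are stored as a 4-D array and drops the primes on f and a for readability.

```python
import numpy as np
from math import comb

def bernstein(N, i, t):
    """Bernstein basis B_i(t) = C(N, i) * t^i * (1 - t)^(N - i), as in Eq. (1)."""
    return comb(N, i) * t ** i * (1.0 - t) ** (N - i)

def eval_bezier_patch(P, u, v, f, a):
    """Evaluate a 4-D tensor-product Bezier patch.

    P has shape (Nu+1, Nv+1, Nf+1, Na+1) and holds the control points
    P_{i,j,k,l}. u, v are normalized image coordinates and f, a the
    normalized focal length and aperture, all in [0, 1]. The per-dimension
    degrees Nu, Nv, Nf, Na may differ, matching the observation that the
    smoothness can differ across the four domains.
    """
    Nu, Nv, Nf, Na = (n - 1 for n in P.shape)
    Bu = np.array([bernstein(Nu, i, u) for i in range(Nu + 1)])
    Bv = np.array([bernstein(Nv, j, v) for j in range(Nv + 1)])
    Bf = np.array([bernstein(Nf, k, f) for k in range(Nf + 1)])
    Ba = np.array([bernstein(Na, l, a) for l in range(Na + 1)])
    # Sum over i,j,k,l of P[i,j,k,l] * Bu[i] * Bv[j] * Bf[k] * Ba[l].
    return np.einsum('ijkl,i,j,k,l->', P, Bu, Bv, Bf, Ba)

# Sanity check: the Bernstein bases form a partition of unity, so a patch
# whose control points are all equal must reproduce that constant everywhere.
P = np.full((4, 4, 3, 5), 0.75)
val = eval_bezier_patch(P, 0.3, 0.7, 0.5, 0.2)
```

Because each basis vector sums to one, the patch interpolates smoothly between the control points, which is what permits prediction at device configurations that were never measured.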
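Step (a), sparse sampling, implies a later fitting stage in which the control points are recovered from the few measured configurations. As a hedged one-dimensional illustration (the helper names and the sample profile are invented here; the paper fits a higher-dimensional patch), control points can be obtained by least squares against the Bernstein basis and then used to predict a value at an unsampled setting:

```python
import numpy as np
from math import comb

def bernstein_matrix(N, ts):
    """Design matrix A with A[m, i] = B_i(t_m) for a degree-N Bezier."""
    return np.array([[comb(N, i) * t ** i * (1 - t) ** (N - i)
                      for i in range(N + 1)] for t in ts])

def fit_bezier_curve(ts, ys, degree):
    """Least-squares fit of Bezier control points to samples (ts, ys)."""
    A = bernstein_matrix(degree, ts)
    ctrl, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return ctrl

def eval_bezier_curve(ctrl, ts):
    """Evaluate the fitted curve at (possibly unsampled) settings ts."""
    return bernstein_matrix(len(ctrl) - 1, ts) @ ctrl

# Sparsely sample a smooth stand-in for "falloff vs. normalized aperture"...
ts = np.linspace(0.0, 1.0, 7)
ys = 1.0 - 0.4 * ts ** 2          # invented measurement profile
ctrl = fit_bezier_curve(ts, ys, degree=3)
# ...then predict at a setting that was never measured.
pred = eval_bezier_curve(ctrl, np.array([0.37]))[0]
```

Since the sampled profile here is quadratic, it lies exactly inside the span of a cubic Bernstein basis, so the prediction at 0.37 matches the underlying profile to numerical precision.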