
Field Curvature Correction Using Focal Sweep

Shigehiko Matsunaga and Shree K. Nayar

IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, VOL. 1, NO. 4, DECEMBER 2015, p. 259

Abstract—Camera lenses have become increasingly complex to keep pace with the demand for ever increasing imaging resolution. This complexity results in several disadvantages, including significant increase in size, cost, and weight of the camera. In this paper, we propose methods to simplify the lens of an imaging system without sacrificing resolution. Our approach is to use either the focal stack or the focal sweep method to emulate a curved image sensor. This enables us to design a lens without the need to correct for field curvature. The end result is a lens with significantly lower complexity than a lens that is corrected for field curvature. We have verified the performance and practical viability of our approach using both simulations and experiments with prototype focal stack and focal sweep cameras.

Index Terms—Depth of field, field curvature, focal sweep, imaging optics, programmable pixel exposure, and computational imaging.

I. INTRODUCTION

Consumer cameras currently produce photographs with tens of millions of pixels. The demand for higher resolution cameras continues in response to the needs of applications in fields as diverse as medical care, automobile safety, security, factory automation, robotics, etc. The field of camera optics development has correspondingly become increasingly complex as it tries to keep pace with the ever increasing resolution of image sensors. In optics design today, minimizing optical aberrations is the key to achieving high performance.

Optical aberrations are classified as either monochromatic aberration or chromatic aberration [1]. Monochromatic aberration consists of five types: spherical aberration, coma, astigmatism, field curvature, and distortion. These five aberrations are also known as Seidel's aberrations. Chromatic aberration includes two components: longitudinal chromatic aberration and lateral chromatic aberration. Minimizing all seven of the above aberrations can be achieved by combining several lenses. However, using multiple lenses comes with several disadvantages, including a significant increase in the size, cost, and weight of the camera. For these reasons, it would be highly beneficial if we could find ways to simplify the lens of an imaging system without sacrificing resolution.

In this paper, we show that focal stack and focal sweep imaging can be utilized to reduce the complexity of a lens. Focal stack and focal sweep are not new concepts [2]–[5]; they are well-known approaches for extending the depth of field (EDOF) of an imaging system and for computing the 3D structure of a scene. Our approach is to use these techniques to correct field curvature instead of using a compound lens. Designing a lens system without correction of field curvature significantly reduces the constraints that need to be invoked during the design process. Our approach is based on the well-known fact that high image quality can be obtained on a curved image surface with a smaller number of lenses. The curved image surface is emulated either by capturing a set of images corresponding to different sensor locations (focal stack) or by capturing a single image with coded exposure while the image sensor is in continuous motion (focal sweep). In addition, chromatic aberration is also reduced by selecting the best focused image surface for each of the color channels.

This paper is organized as follows. First, we describe related work in section 2. Next, in section 3, we discuss ways to design a lens system without correction of field curvature, and describe the advantages of using a curved image surface. In section 4, we propose an algorithm to generate high resolution images by using focal stack and focal sweep to correct field curvature. We have built a focal stack camera and a focal sweep camera using off-the-shelf components to demonstrate our approach. Finally, experimental results that demonstrate the practical feasibility of our approach are shown in section 5.

Manuscript received January 18, 2015; revised June 29, 2015; accepted September 29, 2015. Date of publication October 14, 2015; date of current version December 08, 2015. This work was conducted while S. Matsunaga was a Visiting Scholar at the Vision Laboratory at Columbia University. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Sabine Susstrunk. S. Matsunaga is with Sony Corporation, Tokyo 108-0075, Japan. S. K. Nayar is with the Department of Computer Science, Columbia University, New York, NY 10027 USA. Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TCI.2015.2491181

II. RELATED WORK

A. Camera Array System and Curved Sensor

High resolution images can be captured using a camera array constructed by tiling a large number of cameras [6], [7]. However, such a system is inherently expensive and large. Another approach is to use a multiscale optical system [8], which consists of a large scale objective lens and a small scale non-uniform lens array used to correct aberrations caused by the objective lens. Although the size of a multiscale system can be compact, the individual elements of the lens array have different optical aberrations. Such a system can be difficult to manufacture and align with respect to the image sensor. Instead of using the lens array, Ford et al. proposed the use of wave guide coupling of a curved image surface to a conventional (flat) sensor [13]. This solution also must overcome challenges related to alignment and mass manufacturing.

The use of a curved image sensor is one way to correct field curvature and obtain high resolution images. Rim et al. showed that a curved image sensor can decrease the number

2333-9403 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

of optical elements of a lens without reducing image quality [9]. Cossairt and Nayar proposed a gigapixel imaging system consisting of a ball lens shared by several small planar sensors emulating a curved image surface to acquire high resolution images [10]. Several research groups have attempted to fabricate curved sensors [11], [12]. However, the technology required to mass manufacture curved sensors, especially high resolution ones with high curvature, is not available at this time. In addition, when such technology becomes available, it is expected to be significantly more expensive than the technology used to manufacture today's flat sensors.

Fig. 1. Field curvature. A fronto-parallel plane is focused on a curved image surface.

B. Focal Stack and Focal Sweep

The combined use of optics and signal processing has been shown to produce various benefits in imaging. Dowski proposed a hybrid optical-signal processing system that uses a phase plate to extend depth of field (DOF) [14]. Robinson showed that spherical aberration can be corrected by signal processing, thus achieving high contrast imaging and EDOF with a simple lens [15]. Cossairt observed that axial chromatic aberrations can be exploited to extend DOF [16].

Other ways to capture an EDOF image are focal stack and focal sweep. Focal stack uses multiple images taken with different focus settings. It has been used for 3D scene reconstruction as well [2], [3]. Focal sweep involves the capture of a single image during a rapid sweep of the sensor, lens, or object over a large range of depths. Häusler showed that the DOF of an optical microscope can be extended by moving an object along the optical axis during image exposure [4]. Nagahara extended DOF by moving the sensor instead [5]. In our work, we use the focal stack and focal sweep methods not for extending DOF, but for correcting field curvature, thus enabling us to use a simple lens system for high resolution imaging.

Recently, several works showed that nearly all optical aberrations can be corrected by digital signal processing, and high resolution imaging can be obtained from a simple lens [17], [18]. However, this requires estimation of the point spread function (PSF), which is most often dependent on the depth of the scene. Moreover, this approach of image enhancement via PSF deconvolution is prone to an increase in image noise. In our approach, we design the lens to minimize all optical aberrations except for field curvature, which we remove by emulating a curved image sensor using focal stack or focal sweep. We show that this approach achieves high image resolution and high signal-to-noise ratio (SNR) while significantly reducing lens complexity.

III. LENS DESIGN WITHOUT CORRECTION OF FIELD CURVATURE

A. Field Curvature

In an ideal lens, the rays from a point light source on the object side would converge at a single point on the image side. Unfortunately, such an ideal lens does not exist. The point light source is instead blurred by optical aberrations resulting from the shape of the lens. As we described in section 1, optical aberrations are classified into seven types. In conventional lens design, all these aberrations are simultaneously minimized by combining several lenses. In this approach, however, as the required resolution of a camera increases, lens design becomes more complex and difficult. In addition, as the required resolution increases, so do the size, cost, and weight of the lens. Therefore, it is highly desirable to find a way to simplify a lens system without sacrificing its resolution. In order to simplify the system, we focus on field curvature.

Field curvature is an aberration that converts a fronto-parallel planar surface into a curved image surface, as shown in Fig. 1. It is well known that if all other aberrations are absent, a flat image surface can be obtained by reducing the Petzval sum [19]. If an optical system contains several widely separated thin elements, the Petzval sum is given by:

P = Σ_{i=1}^{k} 1/(n_i f_i),   (1)

where k is the number of elements, f_i is the focal length of each element, and n_i is the refractive index of each element. The above equation reveals that lenses with both positive and negative focal lengths are necessary to reduce the Petzval sum. This fact causes lens systems to quickly become complex. If we can emulate a curved image surface, we do not need extra lenses to correct field curvature.

B. Triplet Lens Design: With and Without Field Curvature Correction

In this section, we describe the difference between conventional lens design and lens design without correction of field curvature using the case of a triplet lens. Fig. 2 compares triplet lenses with and without correction of field curvature. All the triplet lenses have the same specification: Fno = 2.0, focal length = 17.6 mm, and image height = 3.6 mm for a 1/2.5-inch sensor. The lens prescription data is shown in Table I. The optical systems and their performances were generated using the CodeV optical design software [20]. We used a weighted sum of the response at the wavelengths λ = 656.3 nm, 587.6 nm, 546.1 nm, 486.1 nm, and 435.8 nm, with relative weights of 1, 2, 4, 2, and 1, respectively.

System (a) shows a triplet lens developed using conventional lens design. System (b) shows a triplet lens with a flat image surface, but without correction of field curvature. System (c)

Fig. 2. Comparison of triplet lens implementations. The left column (a, d, g, i) shows the properties of a conventional design triplet lens using a flat sensor. The center column (b, e, h, j) shows the properties of a triplet lens using a flat sensor without correction of field curvature. The right column (c, f, k) shows the properties of the same triplet lens as in the center column, but using a curved sensor instead of a flat one. Relaxing the constraint of field curvature correction and using a curved sensor makes it possible to obtain a high quality image with fewer lenses.

shows the same lens system as in (b), but with a curved image surface. Fig. 2(d)-(f) show the modulation transfer functions (MTF) of the three systems. The diffraction limit (black dotted line), the MTF on the axis (red line), and the MTF for the image heights of 1.8 mm (green line) and 3.6 mm (blue line) are shown. We also show the tangential MTF (solid line) and radial MTF (dashed line). In addition, Fig. 2(g)-(h) show longitudinal aberration plots and Fig. 2(i)-(k) show spot diagrams. The different colors of the plots correspond to the five different wavelengths used in the simulation.

First, we compare system (a) with (b). Fig. 2(g), (i) show that the triplet lens is not sufficient for correcting all the aberrations. Since other aberrations, such as chromatic aberration and astigmatism, are not reduced in exchange for field curvature correction, the MTF for each image height is low [Fig. 2(d)]. On the other hand, Fig. 2(h) shows that other aberrations can be corrected by relaxing the constraint of field curvature correction. It is clear that the off-axis performance is low because of field curvature [Fig. 2(e), (j)]. However, the on-axis MTF is much higher than in the case of system (a). Were the image surface a curved one with a shape consistent with the field curvature [Fig. 2(c)], the off-axis performance would be as high as that of the on-axis one [Fig. 2(f), (k)]. Therefore, high performance is easily obtained with a curved image surface.
In other words, for any desired performance target, a curved image surface simplifies the lens.
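The role of the Petzval sum (Eq. (1) in Section III-A) in this trade-off can be illustrated numerically. The element values below are invented for illustration only; they are not the paper's triplet prescription:

```python
# Petzval sum P = sum_i 1 / (n_i * f_i) for a system of thin elements
# (Eq. 1). A flat field requires P close to zero, which forces a mix of
# positive and negative focal lengths; giving up that constraint lets
# all elements keep the same sign. Element data below are illustrative.

def petzval_sum(elements):
    """elements: list of (refractive_index, focal_length_mm) tuples."""
    return sum(1.0 / (n * f) for n, f in elements)

# All-positive triplet: field curvature is left uncorrected.
uncorrected = [(1.52, 25.0), (1.52, 30.0), (1.62, 40.0)]

# Triplet with a negative middle element chosen to drive P toward zero.
corrected = [(1.52, 20.0), (1.62, -14.5), (1.52, 25.0)]

P_u = petzval_sum(uncorrected)
P_c = petzval_sum(corrected)
assert abs(P_c) < abs(P_u)  # the negative element reduces the Petzval sum
```

The sketch only shows the sign argument made in the text: without a negative-focal-length element, the sum of strictly positive terms cannot approach zero.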

TABLE I THE LENS PRESCRIPTION DATA

Fig. 3. Flow diagram of the proposed system. (a) Focal stack, (b) focal sweep, and (c) 3D profile of curved image surface (RGB) when chromatic aberration exists. Focal stack and focal sweep are utilized to emulate a curved sensor.
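The focal-stack path in Fig. 3(a) amounts to clipping, from each image in the stack, the disc or ring of image heights that is in focus at that sensor position, and assembling the pieces into one all-focused image. A minimal NumPy sketch, assuming the boundary radii `r_bounds` (which the paper derives from a field-curvature model) are already known:

```python
import numpy as np

def combine_focal_stack(stack, r_bounds):
    """stack: (N, H, W) images taken at N sensor positions.
    r_bounds: N+1 increasing radii (pixels); image i is in focus
    for image heights in [r_bounds[i], r_bounds[i+1])."""
    n, h, w = stack.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)  # radius from center
    out = np.zeros((h, w), dtype=stack.dtype)
    for i in range(n):
        mask = (r >= r_bounds[i]) & (r < r_bounds[i + 1])
        out[mask] = stack[i][mask]  # a disc for i == 0, rings afterwards
    return out
```

The half-open intervals guarantee that each pixel is taken from exactly one stack image; magnification and shift correction, which the paper applies beforehand via off-line calibration, is assumed done.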

IV. METHOD FOR CAPTURING A HIGH RESOLUTION IMAGE

A. Focal Stack and Focal Sweep for Correction of Field Curvature

Both focal stack and focal sweep can be used to emulate a curved image sensor. Between the two, focal sweep would be the preferred approach as it is fast and requires the capture of only a single image. However, it does require the use of an image sensor that has pixel-wise exposure control. Therefore, in applications where speed of capture is not critical, focal stack would be a better choice as it can be easily implemented using off-the-shelf components. Fig. 3 illustrates the two approaches.

In the case of focal stack [Fig. 3(a)], the images are captured using a set of sensor positions that cover the range of the field curvature. If there are magnification changes and image shifts between images in the stack, they can be corrected by using an off-line calibration of the system. Next, the circular focused area (either a disc or a ring) in each image is clipped from it. By combining the clipped images, we can obtain a single all-focused image. In the case of focal sweep [Fig. 3(b)], a single image is captured while the sensor sweeps the range of field curvature. The exposure timing of each pixel is synchronized with the sensor motion in order to extract the brightness value

that corresponds to the moment when the pixel passes through the curved image surface. Although the captured image is slightly blurred because of motion and the finite exposure time, it can be deblurred by deconvolution with the integrated point spread function (IPSF) (see [5]).

Fig. 3(c) shows a 3D surface of a curved image surface for each of the three color channels of the image sensor. The three image surfaces do not coincide since the lens system has chromatic aberration. Although the best focused surface for any given image height is different for each color, our approach can remove chromatic aberration effects as well by clipping (in the case of a stack) and exposing (in the case of a sweep) the appropriate pixels for each color. In short, by using focal stack or focal sweep, we are able to remove both field curvature and chromatic aberration by simultaneously emulating curved image surfaces for each of the three color channels of the image sensor. In the case of multispectral imaging, a larger number of curved image surfaces may be emulated simultaneously.

Fig. 4. Optimizing the number of focal stack images. The circular area clipped from each image is determined by the distance d.

Fig. 5. Focal sweep for covering the whole range of field curvature. The sweep range of each pixel includes the sensor position for which the image is best focused.

B. Optimizing the Number of Focal Stack Images

In order to minimize the total exposure time needed to capture the stack images, we optimize the number of images. Our approach is based on the degradation of the MTF. This is necessary because a combined image would not be useful if the MTFs of the individual clipped areas are very different. First, we choose a specific spatial frequency fs that we would like to preserve in the final image. We then set a limit q to the permissible degradation of the MTF within the first clipped circular region (from image height 0 to Y1).

Fig. 4 shows how to divide the whole range of field curvature D. Let us suppose that the image sensor is placed at the position Z1 where the MTF on the axis is maximized. Then, the MTF at the image height Y1 is lower than the MTF on the axis. If the degradation of the MTF is within q, Y1 becomes the boundary of the circular region clipped from the first image. Then, the distance d is calculated by using a polynomial approximation of the field curvature. The second image is captured when the image sensor is placed at the position Z2, which is 2d away from Z1. The boundary of the second image Y2 is also calculated using the polynomial field curvature model. In this way, we determine the boundaries of the images, and the total number of images N can be determined as N = 1 + ⌊D/(2d)⌋.

C. Integrated PSF for Focal Sweep

In the case of focal sweep, we need to use a finite exposure time for each pixel to ensure that the captured image is bright enough. This corresponds to a finite sweep range for each pixel, which affects the MTF of the integrated PSF. Fig. 5 illustrates how the whole range of the field curvature can be covered. The sweep range of each pixel is 2α, and it includes the sensor position (black curve) for which the image is best focused.

Let P(x, y, z) denote the PSF when the object plane and the image height of the sensor are fixed. The integrated PSF (IPSF) is given by

IPSF = ∫_{z0−α}^{z0+α} P(x, y, z) dz,   (2)

where z0 is the sensor position for which the image is best focused, and α is half of the sweep range along the z axis.

Fig. 6. MTF calculated from the IPSF. The sweep range of IP1 is half of that of IP2. Sweeping a longer range causes degradation of the MTF.
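The integral in Eq. (2) is evaluated in practice by summing PSFs at discrete sensor positions across the sweep. A 1D sketch using a hypothetical Gaussian defocus PSF whose width grows with distance from the best-focus position z0 (this blur model and all its constants are assumptions, not the paper's ray-traced PSFs):

```python
import numpy as np

def psf_1d(x, z, z0, blur_per_um=0.05, w0=0.5):
    """Hypothetical defocus PSF: a normalized Gaussian whose width
    grows linearly with the defocus distance |z - z0| (in um)."""
    w = w0 + blur_per_um * abs(z - z0)
    p = np.exp(-0.5 * (x / w) ** 2)
    return p / p.sum()

def ipsf_1d(x, z0, alpha, steps=41):
    """Discrete version of Eq. (2): average the PSF over the sweep
    range z in [z0 - alpha, z0 + alpha]."""
    zs = np.linspace(z0 - alpha, z0 + alpha, steps)
    return np.mean([psf_1d(x, z, z0) for z in zs], axis=0)

x = np.linspace(-10, 10, 201)
ip1 = ipsf_1d(x, z0=0.0, alpha=20.0)  # IP1: alpha = 20 um
ip2 = ipsf_1d(x, z0=0.0, alpha=40.0)  # IP2: alpha = 40 um
# A longer sweep spreads the IPSF, i.e. lowers its MTF (cf. Fig. 6).
assert ip2.max() < ip1.max()
```

The final assertion mirrors the observation in Fig. 6: doubling α broadens the IPSF and therefore degrades the MTF.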

Fig. 7. The simulated IPSF for different settings. Compared with the IPSF of a conventional triplet lens, the IPSF of a triplet without correction of field curvature does not vary with the normalized image height and the object distance.
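Because a single IPSF is valid across the field, the captured sweep image can be deblurred with ordinary Wiener deconvolution. A minimal frequency-domain sketch; the kernel and the noise-to-signal ratio `nsr` are placeholders for the measured IPSF and noise level:

```python
import numpy as np

def wiener_deconvolve(image, kernel, nsr=1e-2):
    """Wiener filter: F_hat = conj(H) / (|H|^2 + nsr) * G, computed in
    the Fourier domain. The kernel is zero-padded to the image size and
    rolled so that its center sits at the origin (circular convolution)."""
    h = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    h[:kh, :kw] = kernel
    h = np.roll(h, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(h)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```

With a small `nsr` and a blur produced by the same kernel, the output approaches the sharp image; in practice `nsr` trades residual blur against noise amplification, the trade-off discussed in Section IV-C.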

Fig. 8. Simulation results for field curvature correction using focal stack and focal sweep. (a) One-shot image using a conventional triplet lens, (b) one-shot image using a triplet lens without correction of field curvature, (c) focal stack image obtained by combining circular regions of the stack images, (d) focal sweep image emulated by combining multiple images, and (e) focal sweep image after deconvolution with the IPSF. Note that image quality in the corner areas is significantly improved using focal stack and focal sweep.
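For reference, the stack-size rule of Section IV-B, N = 1 + ⌊D/(2d)⌋, reproduces the 42 sections used in this simulation once the parameter values D = 332 μm and d = 4 μm reported in Section IV-D are plugged in:

```python
import math

def num_stack_images(D_um, d_um):
    """N = 1 + floor(D / (2 d)): one image at the first position Z1 plus
    one image per step of 2d across the field-curvature range D."""
    return 1 + math.floor(D_um / (2.0 * d_um))

assert num_stack_images(332, 4) == 42  # simulation values from Section IV-D
```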

Fig. 6 shows the MTF calculated from the IPSF of the triplet lens without correction of field curvature [Fig. 2(b)]. We used the Zemax optical design software to calculate the IPSF from the discrete PSF, instead of integrating the continuous PSF, and Matlab was used to calculate the MTF from the IPSF [21], [22]. The MTF of IP1 is obtained using α = 20 μm, while the MTF of IP2 is obtained using α = 40 μm. The image height is on-axis in both cases. As can be seen, the MTF calculated from the IPSF is affected by α. A large value of α (e.g., IP2) causes degradation of the MTF. These plots also indicate that, as α increases, noise would be more highly amplified after deconvolution.

On the other hand, α also affects the total exposure time. When the exposure time of each pixel while sweeping is ts, the constant velocity of the sweep is given by vs = 2α/ts. Then, the total exposure time T can be determined as:

T = (D + 2α)/vs = (D/(2α) + 1) · ts.   (3)

The above expression shows that as α increases, T decreases for any given ts. In short, there is a trade-off between noise in the final image and the total exposure time used.

Another feature of our approach is that the IPSF does not vary significantly with image height and object distance. Fig. 7(a) shows the 1D profile of the IPSF calculated using the conventional triplet design [Fig. 2(a)]. Fig. 7(b)-(d) show the 1D profiles of the IPSF calculated using the triplet lens without correction of field curvature [Fig. 2(b)], for different object distances, when α = 20 μm and the normalized image height Y varies from 0 to 0.9. As seen in Fig. 7(a), the IPSF of the conventional triplet varies significantly with image height. In contrast, the IPSF of the triplet without correction of field curvature does not vary with respect to normalized image height and object distance. Therefore, we are able to use a single IPSF for recovering the final high resolution image via deconvolution.

D. Simulation Results

To verify our concept, we use the 2D image simulation tool of CodeV [20]. We choose fs = 80 cycles/mm and q = 0.05 for

Fig. 9. Focal stack camera. The system includes a sensor whose position is controlled by a translation stage.

Fig. 10. Lens simplification. (a) A commercial lens (with five elements), and (b) a simplified lens (with two elements) that produces similar image quality when used with a focal stack or focal sweep.

our simulations. This gives us the following parameter values: D = 332 μm, d = 4 μm, and N = 42. Fig. 8 shows a comparison between a single image produced by a conventional triplet (a), a single image produced by a triplet without correction of field curvature (b), and a combined image which is computed from a focal stack (c). The size of each Kanji character in the images is about 44 × 44 pixels and the size of the images is 3840 × 2160 pixels. As expected, the combined image looks sharper than the other two, especially in the corners. Although the system uses a lens with only three elements, its quality is equivalent to that of a 4K resolution camera lens.

For our focal sweep simulation, we simulated a large number of images corresponding to different sensor positions and used them to obtain the focal sweep image. Fig. 8(d) shows the focal sweep image consisting of 42 circular sections. Each circular section is computed by averaging the focused image and four of its neighbor images within the range of ±α, with α = 20 μm. Although it is a little blurred compared with (c), it still looks sharper than (a) and (b). Moreover, the focal sweep image can be easily deblurred by deconvolution. A number of techniques have been proposed for deconvolution [23]. We used Wiener deconvolution with the single IPSF shown in Fig. 7(b), where Y = 0, to obtain the result shown in (e).

V. EXPERIMENTS

A. Focal Stack

We first built the focal stack camera shown in Fig. 9. It consists of a lens, an image sensor that has a rolling shutter (Lumenera Lw570c, 1/2.5-inch CMOS sensor with 2592 × 1944 pixels), and a translation stage (Physik Instrumente M-111.1DG). This setting is almost the same as the one used in previous work on extended depth of field [5]. We used a compact commercial lens with 16 mm focal length and Fno = 5.6 (Edmund Optics #56-528). For the simplified lens, we used an aspherized doublet lens with 14 mm focal length and Fno = 5 (Edmund Optics #49-658). Fig. 10 shows a side-by-side comparison between the two lenses. The commercial lens uses five elements to correct all the aberrations including field curvature, while the doublet consists of only two elements, which are insufficient to correct all the aberrations. Thus field curvature is not corrected.

Fig. 11 shows a comparison between images captured using the two lenses. Due to the slight difference in the fields of view of the two lenses, we use posters of different sizes as targets for capturing the images. Fig. 11(a) shows an image captured by the commercial lens with an exposure time of 25 ms. The resolution in the corners is high because the field curvature is well corrected by the five elements of the lens. Fig. 11(b) shows a one-shot image focused at the center with an exposure time of 20 ms. The corners of the image are severely blurred due to field curvature. Fig. 11(c) shows a focal stack image consisting of 30 circular image regions, where the regions are calculated using simulation with fs = 60 cycles/mm, q = 0.05, D = 468 μm, and d = 16 μm. The total exposure time was 600 ms for measuring the 30 circular images. The image quality in the corners is significantly improved compared with (b).

While capturing a focal stack, the motion of the sensor causes changes in magnification and introduces shifts in the stack of images. These effects are measured once with a calibration chart and then corrected when each subsequent stack is captured. In addition, the field curvature of an actual lens varies from the simulated data. In order to select the best-focused pixels from the stack images, we apply the algorithm for determining depth from focus shown in Fig. 12. As our first test target, we use a textured Kanji chart. After capturing a focal stack, magnification change and image shift are corrected using the calibration data.
Then, we apply a modified Laplacian filter to the stack images in order to obtain a focus index map, which represents the best-focused image number for each pixel [3]. Since the focus index map is not smooth, a polynomial is fitted to it to

Fig. 11. Comparison of images captured using a commercial lens and a doublet. (a) One-shot image taken using a commercial lens, (b) one-shot image taken using the doublet without correction of field curvature, (c) focal stack image consisting of circular image regions computed using simulated lens data, (d) focal stack image consisting of best-focused pixels computed using a depth of focus algorithm. The focal stack image has almost the same image quality as the one-shot image captured by the commercial lens. Moreover, selecting best-focused pixels from the focal stack results in higher image quality.

make the map smooth. By selecting the best focused pixels from the stack images using this smoothed index map, the image quality in the corners is further improved, as seen in Fig. 11(d). In our experiments, we used only a single index map corresponding to the green channel of the color image. If a lens has noticeable chromatic aberrations, an index map can be used for each of the three color channels to correct for the chromatic aberrations, as shown in Fig. 12.

B. Focal Sweep

In our context, the use of focal sweep requires the use of an image sensor with per-pixel exposure timing control. Such a sensor is not available at this point in time. Therefore, we emulate it by using a liquid crystal on silicon (LCoS) device. LCoS has been used as a spatial light modulator in various imaging applications [24], [25]. Our focal sweep camera design is similar to the one used in these previous works.

Fig. 13 shows our focal sweep camera system. It consists of an objective lens, a field lens, three relay lenses, a polarizing beam splitter, an LCoS device, an image sensor, and a translation stage. Only off-the-shelf components are used in the system. We used an aspherized doublet lens with 14 mm focal length and Fno = 5 as the objective lens (Edmund Optics #49-658), two plano-convex lenses with 30 mm focal length as the field lens (Edmund Optics #45-363), an achromatic doublet lens with 100 mm focal length as the relay lens (Edmund Optics #32-327), a polarizing cube beam splitter (Edmund Optics #49-002), an LCoS device (Forth Dimension Displays SXGA-3DA, resolution 1280 × 1024), an image sensor with a global shutter (Lumenera Lw230c, 1/1.8-inch CCD with 1616 × 1216 pixels), and a translation stage (Physik Instrumente M-111.1DG). Captured images are of size 1280 × 960 pixels.

As shown in Fig. 13(a), incident light from the scene is focused on the virtual image plane by the objective lens.
The sensor is not available at this point in time. Therefore, we emu- field lens is used to prevent vignetting. The beam splitter sep- late it by using a liquid crystal on silicon (LCoS) device. LCoS arates the light into S-polarized light and P-polarized light has been used as a spatial light modulator in various imag- by reflection and transmission, respectively. The transmitted ing applications [24], [25]. Our focal sweep camera design is P-polarized light is reflected by the LCoS device. The LCoS similar to the one used in these previous works. device can rotate the polarization direction at each pixel based Fig. 13 shows our focal sweep camera system. It consists on the binary data applied to it. If the pixel value is 1, the of an objective lens, a field lens, three relay lenses, a polar- P-polarized light is converted into S-polarized light. S-polarized izing beam splitter, an LCoS device, an image sensor, and a light is again reflected by the beam splitter, and reaches the translation stage. Only off-the-shelf components are used in image sensor. Conversely, if the pixel value is 0, P-polarized MATSUNAGA AND NAYAR: FIELD CURVATURE CORRECTION 267

Fig. 12. Algorithm for depth of focus. Best focused pixels are selected using a focus index map obtained by applying a focus operator to the focal stack.
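The procedure in Fig. 12 can be sketched as follows: a modified-Laplacian focus operator scores every pixel of every stack image, and the per-pixel argmax over the stack gives the focus index map. The polynomial smoothing of the map is omitted in this sketch:

```python
import numpy as np

def modified_laplacian(img):
    """Sum of absolute second differences in x and y, a common focus
    operator: high where the image is sharply textured."""
    p = np.pad(img.astype(float), 1, mode='edge')
    lx = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    ly = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return lx + ly

def focus_index_map(stack):
    """stack: (N, H, W). Returns the per-pixel index of the image in
    which that pixel scores highest, i.e. is best focused."""
    scores = np.stack([modified_laplacian(im) for im in stack])
    return np.argmax(scores, axis=0)

def select_best_focused(stack, index_map):
    """Assemble the output image by picking each pixel from the stack
    image named in the (smoothed) index map."""
    return np.take_along_axis(stack, index_map[None], axis=0)[0]
```

In the paper's pipeline a polynomial is fitted to the raw index map before the final selection; here the raw argmax is used directly.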

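Because the LCoS acts as a per-pixel binary shutter, emulating the coded sweep exposure amounts to displaying, at each sensor position, the binary mask of the annulus that is in focus there. A sketch of such a mask schedule; the annulus boundaries `r_bounds` are an assumed model, not calibrated values:

```python
import numpy as np

def annulus_mask(h, w, r_in, r_out):
    """Binary LCoS pattern: 1 (pass light) inside the annulus of radii
    [r_in, r_out), 0 (block) everywhere else."""
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
    return ((r >= r_in) & (r < r_out)).astype(np.uint8)

def mask_schedule(h, w, r_bounds):
    """One mask per sensor position; r_bounds lists the in-focus
    annulus edges in order, so consecutive masks tile the field."""
    return [annulus_mask(h, w, r_bounds[i], r_bounds[i + 1])
            for i in range(len(r_bounds) - 1)]

# 13 circular sections, as in the focal sweep experiments.
masks = mask_schedule(480, 640, r_bounds=np.linspace(0, 400, 14))
# With half-open annuli, every covered pixel is exposed exactly once.
total = sum(m.astype(int) for m in masks)
```

Displaying each mask (and, per the hardware constraint described below, its complement) while the stage holds the corresponding position reproduces the coded exposure of Fig. 3(b).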
Fig. 13. Focal sweep camera. (a) Optical diagram of the focal sweep camera, and (b) the implemented focal sweep camera system. The focal sweep camera emulates per-pixel exposure by using a liquid crystal on silicon (LCoS) device.

light is reflected with the same polarization direction. Then, it is blocked by the beam splitter. In this way, we can emulate per-pixel exposure control. The camera, LCoS, and translation stage are synchronized by using a trigger signal from a function generator.

Our system is limited in terms of exposure time owing to the use of the LCoS device. It cannot display the same binary image for more than 20 ms, and it also needs to display the complementary binary image for the same period immediately after. Due to this limitation, we captured our set of circular images by using a sensor sweep for each one of the images. The final focal sweep image is thus a combination of these circular focal sweep images, and is equivalent to a single

Fig. 14. Examples of focal sweep images. (a), (d) One-shot image captured using a doublet with the focus maximized for the center of the image. (b), (e) Focal sweep image obtained by combining 13 circular images. (c), (f) Focal sweep image after applying deconvolution. Resolution in the corner regions is significantly improved by our approach.

focal sweep image captured using a single sweep with a system without the above timing constraints.

Our focal sweep images are 1280 × 960 pixels in size. Fig. 14(a), (d) show one-shot images focused at the center with an exposure time of 16 ms. Clearly, the surrounding area of the image is strongly blurred due to field curvature. Fig. 14(b), (e) show a focal sweep image consisting of 13 circular sections. Each circular section is captured using a sweep range of 24 μm with an exposure time of 16 ms, resulting in a total exposure time of 208 ms. The translation stage moves at a speed of 1.5 mm/s during the exposure time. As expected, the resolution away from the center is significantly improved for both scenes. Though the center area is slightly blurred due to the sweep, it can be deblurred by using deconvolution. Fig. 14(c), (f) show the result of deblurring using Wiener deconvolution with a single IPSF that is optimized for the center of the image.

Note that the images have ring-shaped artifacts. This is the result of light leakage caused by the use of the LCoS. This artifact would not appear when a sensor with per-pixel exposure control is used.

VI. CONCLUSION

In this paper, we showed that by relaxing the constraint of correcting field curvature and using a curved image sensor, we can significantly simplify the lens system without compromising image resolution. We proposed the use of focal stack and focal sweep to emulate a curved image sensor. In addition, we implemented focal stack and focal sweep cameras using off-the-shelf components, and demonstrated the effectiveness of our approach via experiments.
VI. CONCLUSION

In this paper, we showed that by relaxing the constraint of correcting field curvature and using a curved image sensor, we can significantly simplify the lens system without compromising image resolution. We proposed the use of focal stack and focal sweep to emulate a curved image sensor. In addition, we implemented focal stack and focal sweep cameras using off-the-shelf components, and demonstrated the effectiveness of our approach via experiments. We showed that by using the focal stack method, a doublet lens having field curvature can capture an image with the same resolution as a commercial lens consisting of five elements. We emulated a curved image sensor using focal sweep with an LCoS device that enables per-pixel exposure control. With this system, we showed that a high-resolution image can be acquired with a simple lens system.

While our proposed approach can be used in various scenarios, it is not suitable for imaging fast-moving objects. This limitation is akin to the one faced by rolling shutter sensors. In our case, the pixels have different exposure timings based on their radial distance from the image center.

REFERENCES

[1] W. Smith, Modern Optical Engineering: The Design of Optical Systems, 4th ed. New York, NY, USA: McGraw-Hill, 2008.
[2] T. Darrell and K. Wohn, "Pyramid based depth from focus," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 1988, pp. 504–509.
[3] S. Nayar and Y. Nakagawa, "Shape from focus," IEEE Trans. Pattern Anal. Mach. Intell., vol. 16, no. 8, pp. 824–831, 1994.
[4] G. Häusler, "A method to increase the depth of focus by two step image processing," Opt. Commun., vol. 6, no. 1, pp. 38–42, 1972.
[5] H. Nagahara, S. Kuthirummal, C. Zhou, and S. Nayar, "Flexible depth of field photography," in Proc. Eur. Conf. Comput. Vis., 2008, pp. 60–73.
[6] B. Wilburn et al., "High performance imaging using large camera arrays," ACM Trans. Graph. (TOG), vol. 24, no. 3, pp. 765–776, 2005.
[7] Y. Nomura, L. Zhang, and S. Nayar, "Scene collages and flexible camera arrays," in Proc. 18th Eurograph. Conf. Render. Techn., 2007, pp. 127–138.
[8] D. Brady and N. Hagen, "Multiscale lens design," Opt. Express, vol. 17, no. 13, pp. 10659–10674, 2009.
[9] S. Rim, P. Catrysse, R. Dinyari, K. Huang, and P. Peumans, "The optical advantages of curved focal plane arrays," Opt. Express, vol. 16, no. 7, pp. 4965–4971, 2008.
[10] O. Cossairt, D. Miau, and S. Nayar, "Gigapixel computational imaging," in Proc. IEEE Conf. Comput. Photogr., 2011, pp. 1–8.
[11] R. Dinyari, S. Rim, K. Huang, P. Catrysse, and P. Peumans, "Curving monolithic silicon for nonplanar focal plane array applications," Appl. Phys. Lett., vol. 92, p. 091114, 2008.
[12] H. Ko et al., "A hemispherical electronic eye camera based on compressible silicon optoelectronics," Nature, vol. 454, pp. 748–753, 2008.
[13] J. Ford et al., "Fiber-coupled monocentric lens imaging," in Imaging and Applied Optics, OSA Technical Digest. Washington, DC, USA: Optical Society of America, 2013, paper CW4C.2 [Online]. Available: https://www.osapublishing.org/abstract.cfm?uri=COSI-2013-CW4C.2
[14] E. Dowski, Jr. and W. Cathey, "Extended depth of field through wave-front coding," Appl. Opt., vol. 34, no. 11, pp. 1859–1866, 1995.
[15] M. Robinson, G. Feng, and D. Stork, "Spherical coded imagers: Improving lens speed, depth-of-field, and manufacturing yield through enhanced spherical aberration and compensating image processing," Proc. SPIE, vol. 7429, p. 74290M, 2009.
[16] O. Cossairt and S. Nayar, "Spectral focal sweep: Extended depth of field from chromatic aberrations," in Proc. IEEE Conf. Comput. Photogr., 2010, pp. 1–8.
[17] C. Schuler, M. Hirsch, S. Harmeling, and B. Schölkopf, "Non-stationary correction of optical aberrations," in Proc. IEEE Conf. Comput. Vis., 2011, pp. 659–666.
[18] F. Heide, M. Rouf, and M. Hullin, "High-quality computational imaging through simple lenses," ACM Trans. Graph. (TOG), vol. 32, no. 5, article 149, 2013 [Online]. Available: http://dl.acm.org/citation.cfm?id=2516974
[19] R. Kingslake and R. Johnson, Lens Design Fundamentals, 2nd ed. Bellingham, WA, USA: SPIE, 2009.
[20] Synopsys, Inc., CODE V Optical Design Software [Online]. Available: http://optics.synopsys.com/
[21] Zemax, LLC, Zemax Optical Design Software [Online]. Available: https://www.zemax.com/home
[22] MathWorks, Inc., MATLAB [Online]. Available: http://www.mathworks.com/
[23] P. Jansson, Deconvolution of Images and Spectra. New York, NY, USA: Academic, 1996.
[24] H. Nagahara, C. Zhou, T. Watanabe, H. Ishiguro, and S. Nayar, "Programmable aperture camera using LCoS," in Proc. 11th Eur. Conf. Comput. Vis., 2010, pp. 337–350.
[25] Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. Nayar, "Video from a single coded exposure photograph using a learned over-complete dictionary," in Proc. Int. Conf. Comput. Vis., 2011, pp. 287–294.

Shigehiko Matsunaga received the B.Eng. and M.Eng. degrees in electrical engineering from Osaka University, Osaka, Japan, in 2003 and 2005, respectively. He has been working with Sony Corporation, Tokyo, Japan, since 2005. He was a Visiting Scholar at Columbia University from 2013 to 2014. His research interests include optical lens design and computational imaging.

Shree K. Nayar received the Ph.D. degree in electrical and computer engineering from the Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA. He is the T. C. Chang Professor of Computer Science with Columbia University, New York, NY, USA. He is the Head of the Columbia Vision Laboratory (CAVE), which develops advanced computer vision systems. His research is focused on three areas: the creation of novel cameras that provide new forms of visual information, the design of physics-based models for vision and graphics, and the development of algorithms for understanding scenes from images. His research interests include applications in the fields of digital imaging, computer graphics, and robotics and human-computer interfaces. For his contributions to computer vision and computational imaging, he was elected to the National Academy of Engineering in 2008, the American Academy of Arts and Sciences in 2011, and the National Academy of Inventors in 2014. He was the recipient of the David Marr Prize (1990 and 1995), the David and Lucile Packard Fellowship (1992), the National Young Investigator Award (1993), the NTT Distinguished Scientific Achievement Award (1994), the Keck Foundation Award for Excellence in Teaching (1995), the Columbia Great Teacher Award (2006), and the Carnegie Mellon Alumni Achievement Award (2009).