Super-Resolving Light Fields in Microscopy: Depth from Disparity
PDF, 16 pages, 1020 KB
Recommended publications
-
Modeling of Microlens Arrays with Different Lens Shapes
Abstract: Microlens arrays are useful in many applications, such as imaging, wavefront sensing, and light homogenizing. Because of different fabrication techniques and processes, the microlenses may take different shapes. In this example, microlens arrays with two typical lens shapes (square and round) are modeled. Because of the different aperture shapes, the focal spots also differ due to diffraction, and the change of the focal-spot distribution with respect to the aberration imposed on the input field is demonstrated.

Modeling task: an input field with a wavelength of 633 nm, a diameter of 1.5 mm, uniform amplitude, and one of four phase distributions (no aberration, spherical aberration, coma, or trefoil) illuminates a square-lens and a round-lens microlens array; the task is to calculate the field on the focal plane behind each type of array and to see how the spot distribution changes with the input-field aberration. The quoted geometry involves dimensions of 1.24 mm, 4.356 mm, and 150 µm.

Results: with no aberration, the square lenslets produce focal spots shaped by diffraction at a square aperture, while the round lenslets produce spots shaped by diffraction at a round aperture. With spherical, coma, or trefoil aberration on the input field, the focal-spot distribution changes accordingly. The fully physical-optics simulation of the system containing the microlens array takes less than 10 seconds.
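As a rough illustration of the kind of calculation described above, the following Python/NumPy sketch computes the Fraunhofer focal spot of a single square versus round lenslet aperture, with an optional aberration phase on the input field. It is not the LightTrans/VirtualLab physical-optics model; the grid size is arbitrary, and treating the quoted 150 µm as the lenslet aperture and 4.356 mm as the focal distance is an assumption.

```python
import numpy as np

wavelength = 633e-9        # m, from the modeling task above
aperture = 150e-6          # m, assumed lenslet aperture width / diameter
focal_distance = 4.356e-3  # m, assumed distance from the lenslet to its focal plane
n = 512                    # samples across one lenslet aperture

x = (np.arange(n) - n / 2) * (aperture / n)
X, Y = np.meshgrid(x, x)
square = (np.abs(X) <= aperture / 2) & (np.abs(Y) <= aperture / 2)
round_ = (X**2 + Y**2) <= (aperture / 2) ** 2

def focal_spot(mask, phase=0.0):
    """Fraunhofer approximation: focal-plane intensity is the |FFT|^2 of the aperture field."""
    field = mask * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))) ** 2

# Physical focal-plane coordinates corresponding to the FFT frequencies.
x_focal = wavelength * focal_distance * np.fft.fftshift(np.fft.fftfreq(n, d=aperture / n))

# No aberration: sinc^2-like cross for the square lenslet, Airy-like pattern for the round one.
I_square = focal_spot(square)
I_round = focal_spot(round_)

# Spherical aberration on the input field (Zernike-like radial term, one wave, illustrative).
rho2 = (X**2 + Y**2) / (aperture / 2) ** 2
I_square_sa = focal_spot(square, phase=2 * np.pi * (6 * rho2**2 - 6 * rho2 + 1))
```
-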
Multi-Perspective Stereoscopy from Light Fields
The MIT Faculty has made this article openly available.
Citation: Changil Kim, Alexander Hornung, Simon Heinzle, Wojciech Matusik, and Markus Gross. 2011. Multi-perspective stereoscopy from light fields. ACM Trans. Graph. 30, 6, Article 190 (December 2011), 10 pages.
As published: http://dx.doi.org/10.1145/2070781.2024224
Publisher: Association for Computing Machinery (ACM)
Version: Author's final manuscript
Citable link: http://hdl.handle.net/1721.1/73503
Terms of use: Creative Commons Attribution-Noncommercial-Share Alike 3.0, http://creativecommons.org/licenses/by-nc-sa/3.0/

Authors: Changil Kim (ETH Zurich, Disney Research Zurich), Alexander Hornung (Disney Research Zurich), Simon Heinzle (Disney Research Zurich), Wojciech Matusik (Disney Research Zurich, MIT CSAIL), Markus Gross (ETH Zurich, Disney Research Zurich). © Disney Enterprises, Inc.

Figure 1: We propose a framework for flexible stereoscopic disparity manipulation and content post-production. Our method computes multi-perspective stereoscopic output images from a 3D light field that satisfy arbitrary prescribed disparity constraints. We achieve this by computing piecewise continuous cuts through the light field that enable per-pixel disparity control. In this particular example we employed gradient domain processing to emphasize the depth of the airplane while suppressing disparities in the rest of the scene.

Abstract (excerpt): This paper addresses stereoscopic view generation from a light field. We present a framework that allows for the generation of stereoscopic image pairs with per-pixel control over disparity.

From the introduction (fragment): …tions of autostereoscopic and multi-view autostereoscopic displays even glasses-free solutions become available to the consumer. However, the task of creating convincing yet perceptually pleasing stereoscopic content remains difficult.
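As an illustration of the "cut" idea in the caption above, here is a minimal Python/NumPy sketch of how a per-pixel view-selection map turns a 3D light field into a multi-perspective image. It shows only the sampling step; the paper's actual contribution, computing cuts that satisfy prescribed disparity constraints, is not reproduced here, and all array shapes and names are assumptions.

```python
import numpy as np

def multi_perspective_image(light_field, cut):
    """Sample one multi-perspective image from a 3D light field.

    light_field: array of shape (V, H, W) or (V, H, W, 3), views along a horizontal baseline.
    cut: integer array of shape (H, W); cut[y, x] selects the view that pixel (y, x) is taken
         from, which is what gives per-pixel control over disparity.
    """
    V = light_field.shape[0]
    H, W = light_field.shape[1:3]
    ys, xs = np.mgrid[0:H, 0:W]
    return light_field[np.clip(cut, 0, V - 1), ys, xs]

# Example: a cut that sweeps linearly across the viewpoints from left to right of the image.
lf = np.random.rand(9, 120, 160)                     # 9 views, 120x160 pixels (toy data)
cut = np.tile(np.linspace(0, 8, 160).astype(int), (120, 1))
left_eye = multi_perspective_image(lf, cut)
right_eye = multi_perspective_image(lf, cut + 2)     # a constant view offset adds disparity
```
-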
Design and Fabrication of Flexible Naked-Eye 3D Display Film Element Based on Microstructure
Micromachines, Article. Axiu Cao, Li Xue, Yingfei Pang, Liwei Liu, Hui Pang, Lifang Shi, and Qiling Deng. Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China. Correspondence: Lifang Shi; Tel.: +86-028-8510-1178. Received: 19 November 2019; Accepted: 7 December 2019; Published: 9 December 2019.

Abstract: Naked-eye three-dimensional (3D) display technology, which needs no wearable equipment, is an inevitable future development trend. In this paper, the design and fabrication of a flexible naked-eye 3D display film element based on microstructures are proposed to achieve a high-resolution 3D display effect. The film element consists of two sets of key microstructures, namely a microimage array (MIA) and a microlens array (MLA). By establishing the basic structural model, the matching relationship between the two groups of microstructures has been studied. Based on 3D graphics software, a 3D object information acquisition model has been proposed to obtain a high-resolution MIA recorded from different viewpoints without crosstalk. In addition, lithography has been used to fabricate the MLA and MIA, and nanoimprinting has been used to integrate them on a flexible film substrate. Finally, a flexible 3D display film element has been fabricated, which is lightweight and can be curled.
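The matching relationship between the MLA and the MIA is the core of such a design. As a hedged illustration only, using the standard integral-imaging pitch-matching geometry rather than the paper's specific structural model, the sketch below computes the microimage pitch needed so that corresponding pixels of all microimages are seen through their lenslets from one viewing distance; all numbers are assumed.

```python
def microimage_pitch(lens_pitch, gap, viewing_distance):
    """p_image = p_lens * (L + g) / L, with g the MLA-to-MIA gap and L the viewing distance.

    The chief ray from each microimage center through its lenslet must converge at the
    viewer, so the microimage pitch is slightly larger than the microlens pitch.
    """
    return lens_pitch * (viewing_distance + gap) / viewing_distance

# Example with assumed numbers: 100 um lenslets, 150 um gap, 400 mm viewing distance.
p = microimage_pitch(100e-6, 150e-6, 0.4)
print(f"microimage pitch = {p * 1e6:.4f} um")   # ~100.0375 um
```
-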
Microlens Array Grid Estimation, Light Field Decoding, and Calibration
Maximilian Schambach and Fernando Puente León, Senior Member, IEEE. Karlsruhe Institute of Technology, Institute of Industrial Information Technology, Hertzstr. 16, 76187 Karlsruhe, Germany.

Abstract—We quantitatively investigate multiple algorithms for microlens array grid estimation for microlens array-based light field cameras. Explicitly taking into account natural and mechanical vignetting effects, we propose a new method for microlens array grid estimation that outperforms the ones previously discussed in the literature. To quantify the performance of the algorithms, we propose an evaluation pipeline utilizing application-specific ray-traced white images with known microlens positions. Using a large dataset of synthesized white images, we thoroughly compare the performance of the different estimation algorithms. As an example, we apply our results to the decoding and calibration of light fields taken with a Lytro Illum camera. We observe that decoding as well as calibration benefit from a more accurate, vignetting-aware grid estimation, especially in peripheral subapertures of the light field.

From the introduction: …lenslet images, while others perform calibration utilizing the raw images directly [4]. In either case, this includes multiple non-trivial pre-processing steps, such as the detection of the projected microlens (ML) centers and estimation of a regular grid approximating the centers, alignment of the lenslet image with the sensor, slicing the image into a light field and, in the case of hexagonal MLAs, resampling the light field onto a rectangular grid. These steps have a non-negligible impact on the quality of the decoded light field and camera calibration. Hence, a quantitative evaluation is necessary where possible. Here, we will focus on the estimation of the MLA grid parameters (to which we refer to as pre-calibration), which …
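For orientation, here is a minimal Python/NumPy/SciPy sketch of the pre-processing steps named in the introduction above: detecting projected microlens centers in a white image as local maxima and fitting a regular grid to them by least squares. It is a generic baseline, not the authors' vignetting-aware estimator, and the thresholds and pitch handling are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_ml_centers(white_image, approx_pitch):
    """Detect projected microlens centers as local maxima of a smoothed white image."""
    smoothed = gaussian_filter(white_image.astype(float), sigma=approx_pitch / 6)
    local_max = maximum_filter(smoothed, size=int(approx_pitch * 0.8)) == smoothed
    peaks = np.argwhere(local_max & (smoothed > 0.3 * smoothed.max()))
    return peaks  # (N, 2) array of (row, col) center estimates

def fit_regular_grid(centers, approx_pitch):
    """Least-squares fit centers ~ offset + i*b1 + j*b2 (offset and two basis vectors).

    The nearest-integer indexing assumes a roughly axis-aligned, rectangular grid; a real
    pipeline would also handle rotation and hexagonal layouts.
    """
    ij = np.round((centers - centers.min(axis=0)) / approx_pitch)
    A = np.column_stack([np.ones(len(centers)), ij[:, 0], ij[:, 1]])
    params, *_ = np.linalg.lstsq(A, centers.astype(float), rcond=None)
    offset, b1, b2 = params
    return offset, b1, b2
```
-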
Scanning Confocal Microscopy with a Microlens Array
Antony Orth and Kenneth Crozier (corresponding authors), School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA.

Abstract: Scanning confocal fluorescence microscopy is performed with a refractive microlens array. We simultaneously obtain an array of 3000 20 µm x 20 µm images with a lateral resolution of 645 nm and observe low power optical sectioning.

OCIS codes: (180.0180) Microscopy; (180.1790) Confocal microscopy; (350.3950) Micro-optics.

High throughput fluorescence imaging of cells and tissues is an indispensable tool for biological research [1]. Even low-resolution imaging of fluorescently labelled cells can yield important information that is not resolvable in traditional flow cytometry [2]. Commercial systems typically raster scan a well plate under a microscope objective to generate a large field of view (FOV). In practice, the process of scanning and refocusing limits the speed of this approach to one FOV per second [3]. We demonstrate an imaging modality that has the potential to speed up fluorescent imaging over a large field of view, while taking advantage of the background rejection inherent in confocal imaging.

Figure 1: a) Experimental setup. Inset: Microscope photograph of an 8x8 sub-section of the microlens array. Scale bar is 80 µm. b) A subset of the 3000 20 µm x 20 µm fields of view, each acquired by a separate microlens. Inset: Zoom-in of one field of view showing a pile of 2 µm beads.

The experimental geometry is shown in Figure 1. A collimated laser beam (5 mW output power, λex = 532 nm) is focused into a focal spot array on the fluorescent sample by a refractive microlens array.
-
Focus Cue Enabled Head-Mounted Display Via Microlens Array
Ryan Burke and Leandra Brickson, Electrical Engineering Department, Stanford University.

Abstract: A new head mounted display (HMD) is designed utilizing a microlens array to resolve the vergence-accommodation conflict common in current HMDs. The hardware and software are examined via simulation to optimize a depth and resolution tradeoff in this system to give the best overall user experience. Ray tracing was used to design the optics so that all viewing zones were mapped to the eye to provide focus cues. A program was also written to convert Lytro stereo pairs to a light field useable by the system.

1. Introduction: There has recently been a lot of research going into developing consumer virtual reality (VR) systems aiming to give the user an immersive 3D experience through the use of head mounted displays. […] a high field of view scene while still supporting focus cues. While these systems are good solutions for the vergence-accommodation conflict, they still are not as accessible to the public as Google Cardboard. Our project investigates adding focus cues to the Google Cardboard system via a microlens array, eliminating the vergence-accommodation conflict.

2. Optical System Design: The proposed optical design of the focus cue enabled head mounted system is shown in Figure 1. In it, the Google Cardboard system is augmented by placing a microlens array (MLA) in front of the display to create a light field, which will then be projected to the virtual image plane by the objective lens. In this system, the MLA will create different viewing zones, which will provide depth …
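To make the viewing-zone idea concrete, here is a hedged Python sketch of how display pixels behind a microlens array (placed one MLA focal length in front of the display) map to ray directions: each lenslet collimates a pixel's light into a direction set by the pixel's offset from the lenslet center, and those directions form the distinct viewing zones. The pitches and focal length are assumed numbers, not values from the report.

```python
import numpy as np

pixel_pitch = 50e-6      # m, assumed display pixel pitch
lenslet_pitch = 500e-6   # m, assumed MLA pitch (10 pixels per lenslet)
f_mla = 3e-3             # m, assumed MLA focal length

def pixel_to_ray(pixel_index):
    """Return (position on the MLA plane, paraxial ray angle in rad) for a 1D pixel index."""
    x_pixel = pixel_index * pixel_pitch
    lenslet_index = np.floor(x_pixel / lenslet_pitch + 0.5)
    x_lens = lenslet_index * lenslet_pitch
    angle = -(x_pixel - x_lens) / f_mla   # collimation by the lenslet, small-angle approximation
    return x_lens, angle

# Pixels under one lenslet span a fan of directions: one viewing zone per pixel.
for i in range(10):
    print(pixel_to_ray(i))
```
-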
Light Field Saliency Detection with Deep Convolutional Networks
Jun Zhang, Yamei Liu, Shengping Zhang, Ronald Poppe, and Meng Wang.

Abstract—Light field imaging presents an attractive alternative to RGB imaging because of the recording of the direction of the incoming light. The detection of salient regions in a light field image benefits from the additional modeling of angular patterns. For RGB imaging, methods using CNNs have achieved excellent results on a range of tasks, including saliency detection. However, it is not trivial to use CNN-based methods for saliency detection on light field images because these methods are not specifically designed for processing light field inputs. In addition, current light field datasets are not sufficiently large to train CNNs. To overcome these issues, we present a new Lytro Illum dataset, which contains 640 light fields and their corresponding ground-truth saliency maps. Compared to current light field saliency datasets [1], [2], our new dataset is larger, of higher quality, contains more variation and more types of light field inputs. This makes our dataset suitable for training deeper networks and benchmarking. Furthermore, we propose a novel end-to-end CNN-based framework for light field saliency detection. Specifically, we propose three novel MAC (Model Angular Changes) blocks to process light field micro-lens images.

Figure 1 (caption excerpt): Illustrations of light field representations: (a), (b) the micro-lens image formed behind the main lens, micro-lens array, and photosensor; (c), (d) the corresponding sub-aperture images.
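As a hedged illustration of what "processing micro-lens images" can mean in a CNN, the PyTorch sketch below uses a convolution whose kernel and stride both equal the angular resolution, so each output activation aggregates all angular samples under one micro-lens. This is only one plausible angular-aware layer, not the paper's actual MAC block design; the 9x9 angular resolution and channel counts are assumptions.

```python
import torch
import torch.nn as nn

angular = 9   # assumed angular samples per micro-lens (e.g., a 9x9 view grid)

# Kernel and stride equal to the angular resolution: one output vector per micro-lens.
angular_conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=angular, stride=angular)

# Toy micro-lens image covering 32 x 32 micro-lenses (a full Lytro Illum image is much larger).
micro_lens_image = torch.randn(1, 3, 32 * angular, 32 * angular)
features = angular_conv(micro_lens_image)
print(features.shape)   # torch.Size([1, 64, 32, 32])
```
-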
Microlenses for Stereoscopic Image Formation
R. P. Rocha, J. P. Carmo and J. H. Correia, Dept. Industrial Electronics, University of Minho, Campus Azurem, 4800-058 Guimaraes, Portugal.

Keywords: Microlenses, Optical filters, RGB, Image sensor, Stereoscopic vision, Low-cost fabrication.

Abstract: This paper presents microlenses for integration on a stereoscopic image sensor in CMOS technology for use in biomedical devices. The aim is to provide an image sensor with stereoscopic vision. An array of microlenses potentiates stereoscopic vision and maximizes the color fidelity, and an array of optical filters tuned to the primary colors will enable multicolor usage. The material selected for fabricating the microlenses was the AZ4562 positive photoresist. The reflow method applied to the photoresist allows the fabrication of microlenses with high reproducibility.

1 INTRODUCTION: Currently, the available image sensing technology is not yet ready for stereoscopic acquisition. The final quality of the image will be improved not only because of the stereoscopic vision but also due to the system's high resolution. Typically, two cameras are used to achieve a two points of view (POV) perspective effect. But this solution presents some problems, mainly because the two POVs, being sufficiently different, cause the induction of psycho-visual […] stereopsis). This means that bad quality stereoscopy induces perceptual ambiguity in the viewer (Zeki, 2004). The reason for this phenomenon is that the human brain is simultaneously more sensitive but less tolerant to corrupt stereo images as well as vertical shifts of both images, being more tolerant to monoscopic images. Therefore, the brain does not accept the differences between the images coming from the left and right channels that originate from the two independent and optically unadjusted cameras.
-
Convolutional Networks for Shape from Light Field
Stefan Heber (Graz University of Technology) and Thomas Pock (Graz University of Technology, Austrian Institute of Technology).

Abstract: Convolutional Neural Networks (CNNs) have recently been successfully applied to various Computer Vision (CV) applications. In this paper we utilize CNNs to predict depth information for given Light Field (LF) data. The proposed method learns an end-to-end mapping between the 4D light field and a representation of the corresponding 4D depth field in terms of 2D hyperplane orientations. The obtained prediction is then further refined in a post processing step by applying a higher-order regularization. Existing LF datasets are not sufficient for the purpose of the training scheme tackled in this paper. This is mainly due to the fact that the ground truth depth of existing datasets is inaccurate and/or the datasets are limited to a small number of LFs. This made it necessary to generate a new synthetic LF dataset, which is based on the raytracing software …

Figure 1: Illustration of LF data. (a) shows a sub-aperture image with vertical and horizontal EPIs. The EPIs correspond to the positions indicated with dashed lines in the sub-aperture image. (b) shows the corresponding depth field, where red regions are close to the camera and blue regions are further away. In the EPIs a set of 2D hyperplanes is indicated with yellow lines, where corresponding scene points are highlighted with the same color in the sub-aperture representation in (b).
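The hyperplane orientations mentioned in the abstract are the slopes of the lines that scene points trace in the EPIs. As a classical point of comparison (not the paper's CNN), the Python/NumPy/SciPy sketch below estimates that slope locally from image derivatives: along the line x = x0 + d*s the intensity is constant, so the disparity d follows from a local least-squares fit of the gradients.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def epi_disparity(epi, window=2.0, eps=1e-8):
    """Local disparity (EPI line slope dx/ds) from image derivatives.

    `epi` is a 2D epipolar-plane image with the angular coordinate s on axis 0 and the
    spatial coordinate x on axis 1. Constant intensity along x = x0 + d*s implies
    g_s + d * g_x = 0, so d = -<g_s g_x> / <g_x g_x> over a small smoothing window.
    """
    epi = epi.astype(float)
    g_s = sobel(epi, axis=0)   # angular derivative
    g_x = sobel(epi, axis=1)   # spatial derivative
    Jsx = gaussian_filter(g_s * g_x, window)
    Jxx = gaussian_filter(g_x * g_x, window)
    return -Jsx / (Jxx + eps)
```
-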
Head Mounted Display Optics II
Gordon Wetzstein, Stanford University. EE 267 Virtual Reality, Lecture 8. stanford.edu/class/ee267/

Lecture overview:
• focus cues & the vergence-accommodation conflict
• advanced optics for VR with focus cues: gaze-contingent varifocal displays, volumetric and multi-plane displays, near-eye light field displays, Maxwellian-type displays
• AR displays

Magnified display: the big challenge is that the virtual image appears at a fixed focal plane, so there are no focus cues. With the display at distance d from a lens of focal length f, the virtual image forms at distance d′ given by the thin-lens equation 1/d + 1/d′ = 1/f.

Importance of focus cues decreases with age (presbyopia): the nearest focus distance recedes with age, from roughly 12–16 D (about 6–8 cm) in childhood toward 0 D (infinity) in older adults (Duane, 1912). Relative importance of depth cues: Cutting & Vishton, 1995.

The vergence-accommodation conflict (VAC): in the real world, vergence and accommodation match; on today's stereo displays, including HMDs, they mismatch. Accommodation is driven by retinal blur and its gradient, which a conventional HMD ties to the fixed screen distance.

Consequences of the vergence-accommodation conflict:
• visual discomfort (eye tiredness & eyestrain) after ~20 minutes of stereoscopic depth judgments (Hoffman et al. 2008; Shibata et al. 2011)
• degraded visual performance in terms of reaction times and acuity for stereoscopic vision
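A small worked example of the thin-lens relation quoted above, with assumed numbers (not values from the lecture): a display 4.5 cm behind a 5 cm eyepiece produces a virtual image 45 cm in front of the lens, so the whole scene sits at one fixed accommodation distance.

```python
def virtual_image_distance(d, f):
    """Solve 1/d + 1/d' = 1/f for d'; a negative result indicates a virtual image."""
    return 1.0 / (1.0 / f - 1.0 / d)

d = 0.045   # display-to-lens distance in meters (assumed)
f = 0.050   # eyepiece focal length in meters (assumed)
d_prime = virtual_image_distance(d, f)   # -0.45 m
print(f"virtual image at {d_prime:.3f} m -> accommodation distance {abs(d_prime):.2f} m")
```
-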
0.8 Um Color Pixels with Wave-Guiding Structures for Low Optical Crosstalk Image Sensors
https://doi.org/10.2352/ISSN.2470-1173.2021.7.ISS-093 © 2021, Society for Imaging Science and Technology. Yu-Chi Chang, Cheng-Hsuan Lin, Zong-Ru Tu, Jing-Hua Lee, Sheng Chuan Cheng, Ching-Chiang Wu, Ken Wu, H.J. Tsai, VisEra Technologies Company, No12, Dusing Rd.1, Hsinchu Science Park, Taiwan.

Abstract: A low optical-crosstalk color pixel scheme with wave-guiding structures is demonstrated in a high resolution CMOS image sensor with a 0.8 um pixel pitch. The high and low refractive index configuration provides good confinement of light waves in the different color channels of a quad Bayer color filter array. The measurement result of this back-side illuminated (BSI) device exhibits significantly lower color crosstalk with enhanced SNR performance, while the better angular response and higher angular selectivity of the phase detection pixels also show the suitability to a new generation of small pixels for CMOS image sensors.

From the introduction: …development of pixel schemes. While electrical crosstalk can be largely decreased by deep trench isolation (DTI), optical crosstalk is becoming severe due to the wave nature of light, and the effect of diffraction must be considered in the array of pixels and microlenses [2-4] when their dimensions are comparable to the wavelength. Figure 3 shows pixel schemes of a Bayer CFA and a quad-Bayer CFA with large photodiodes and small photodiodes, respectively. The distribution of the light field after the microlenses is illustrated for each scheme to show the optical crosstalk due to diffraction in small pixels after the focusing of the microlenses.
-
Light Field Features in Scale and Depth
Donald G. Dansereau (Stanford University, The University of Sydney), Bernd Girod (Stanford University), and Gordon Wetzstein (Stanford University).

Abstract—Feature detectors and descriptors are key low-level vision tools that many higher-level tasks build on. Unfortunately these fail in the presence of challenging light transport effects including partial occlusion, low contrast, and reflective or refractive surfaces. Building on spatio-angular imaging modalities offered by emerging light field cameras, we introduce a new and computationally efficient 4D light field feature detector and descriptor: LiFF. LiFF is scale invariant and utilizes the full 4D light field to detect features that are robust to changes in perspective. This is particularly useful for structure from motion (SfM) and other tasks that match features across viewpoints of a scene. We demonstrate significantly improved 3D reconstructions via SfM when using LiFF instead of the leading 2D or 4D features, and show that LiFF runs an order of magnitude faster than the leading 4D approach. Finally, LiFF inherently estimates depth for each feature, opening a path for future research in light field-based SfM.

Fig. 1: (left) One of five views of a scene that COLMAP's structure-from-motion (SfM) solution fails to reconstruct using SIFT, but successfully reconstructs using LiFF; (right) LiFF features have well-defined scale and depth, measured as light field slope, revealing the 3D structure of the scene – note we do not employ depth in the SfM solution. Code and dataset are at http://dgd.vision/Tools/LiFF, see the supplementary information for dataset details.
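The caption's notion of depth "measured as light field slope" can be illustrated with a simple shift-and-sum search, which is not the LiFF detector itself (LiFF finds scale-space blobs jointly across the 4D light field) but shows how a slope, and hence a depth, can be read out per pixel. The focus measure and integer shifts below are coarse, illustrative choices.

```python
import numpy as np

def refocus(lf, slope):
    """Shift-and-sum refocusing of a 4D light field L[u, v, y, x] at a given slope."""
    U, V, H, W = lf.shape
    uc, vc = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy, dx = slope * (u - uc), slope * (v - vc)
            out += np.roll(np.roll(lf[u, v], int(round(dy)), axis=0),
                           int(round(dx)), axis=1)
    return out / (U * V)

def slope_map(lf, slopes):
    """Per-pixel slope that maximizes local gradient energy (a simple focus measure)."""
    best_slope = np.zeros(lf.shape[2:])
    best_score = np.full(lf.shape[2:], -np.inf)
    for s in slopes:
        img = refocus(lf, s)
        gy, gx = np.gradient(img)
        score = gy**2 + gx**2
        better = score > best_score
        best_slope[better], best_score[better] = s, score[better]
    return best_slope

# Example with toy data: 5x5 views of a 64x64 scene, slopes swept from -2 to 2 pixels per view.
lf = np.random.rand(5, 5, 64, 64)
depth_like = slope_map(lf, np.linspace(-2, 2, 9))
```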