Multi-Aperture Telecentric Lens

Bo Wu and Lei Ye
The Hong Kong Polytechnic University, China

The International Encyclopedia of Geography. Edited by Douglas Richardson, Noel Castree, Michael F. Goodchild, Audrey Kobayashi, Weidong Liu, and Richard A. Marston. © 2017 John Wiley & Sons, Ltd. Published 2017 by John Wiley & Sons, Ltd. DOI: 10.1002/9781118786352.wbieg0106

A multi-aperture telecentric lens is a single telecentric lens with multiple aperture stops that enable it to capture multidirectional parallel light rays. Unlike conventional optical systems in photogrammetry or computer vision, which use a pair of stereo cameras for three-dimensional (3-D) reconstruction, a multi-aperture telecentric lens uses image(s) obtained from the light rays separated by different aperture stops for 3-D reconstruction. A multi-aperture telecentric lens is ideal for use in a variety of machine vision applications, such as noncontact measurement and inspection systems, 3-D reconstruction of close-range targets, and intelligent vision systems in robotic equipment.

The telecentric lens is discussed first. Detailed configurations of the multi-aperture telecentric lens are then described. Key aspects of using a multi-aperture telecentric lens for 3-D reconstruction – including its geometric model and an evaluation of the potential accuracy – are then presented.

Telecentric lens

A telecentric lens is a compound lens used in an imaging system to make objects appear to be the same size independent of their location in space. Most imaging systems with conventional lenses exhibit varying magnification for objects at different distances from the lens. This causes several problems for machine vision and other applications. (i) Objects closer to the lens appear to be larger than those farther from the lens; for example, in an image of a cylindrical pipe the top and bottom crown edges appear as concentric circles of different sizes even though the two circles are perfectly identical. (ii) Object shapes vary with their distance from the center of the field of view (FOV); for example, circles near the center of the FOV appear to be egg-shaped when moved toward the periphery. (iii) Some features or objects may be hidden by objects closer to the lens. In contrast, a telecentric lens removes these perspective or parallax errors to produce an orthographic projection that provides the same magnification at all distances. Pioneered by Moore (1973), telecentric lenses have been pursued by researchers in various settings (Watanabe and Nayar 1996; Bai and Sadoulet 2007). In particular, telecentric lenses are commonly used in machine vision applications, where software analysis is simplified and more accurate due to the reduction of parallax. Imaging systems with telecentric lenses have made it possible to reach dimensional measurement accuracies that can be better than those generated by contact and laser-based methods (Djidel et al. 2006).

There are three types of telecentric lens. The first is an object-space telecentric lens, in which the aperture stop is placed at the front focal plane of the lens, resulting in an entrance pupil location at infinity. A shift in the object plane does not affect image magnification. Such lenses are used in machine vision systems because image magnification is independent of the objects' distance or position in the field of view. The second is an image-space telecentric lens, in which the aperture stop is placed at the rear focal plane of the lens, resulting in an exit pupil location at infinity. A shift in the image plane does not affect image magnification. Such lenses are used with image sensors that do not tolerate a wide range of angles of incidence. The third is a double telecentric lens, in which the aperture stop is placed at the common focal plane, resulting in both the entrance and exit pupils being located at infinity. Shifting either the image or object plane does not affect magnification, given that double-telecentric systems are afocal. Double telecentric lenses have magnification that is more precisely constant than that of lenses that are only object-space telecentric, because the intersection position of the principal ray on the detector does not change, which allows for the precise measurement of objects regardless of their positions. Figure 1 illustrates an idealized double telecentric lens with two thin positive-powered elements, with the aperture stop placed at the common focal plane of the front and rear elements such that it only passes the light rays that are parallel to the optical axis.

[Figure 1: An idealized double telecentric lens.]
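As an illustration (not from the original entry), the short sketch below compares the apparent image size of the same object at two working distances under a conventional (perspective) projection model and a telecentric (orthographic) one; the object size, focal length, magnification, and distances are arbitrary assumed values.

```python
# Illustrative sketch (assumed values, not from the original entry):
# apparent image size of a 10 mm object under a conventional (pinhole,
# perspective) lens versus an object-space telecentric lens.

def perspective_image_size(object_size_mm, focal_length_mm, distance_mm):
    """Pinhole model: image size scales with focal_length / distance."""
    return object_size_mm * focal_length_mm / distance_mm

def telecentric_image_size(object_size_mm, magnification):
    """Telecentric (orthographic) model: image size is independent of distance."""
    return object_size_mm * magnification

obj = 10.0   # object size in mm (assumed)
f = 50.0     # focal length of the perspective lens in mm (assumed)
m = 0.25     # constant magnification of the telecentric lens (assumed)
for z in (200.0, 210.0):  # two working distances 10 mm apart (assumed)
    print(f"Z = {z:5.1f} mm   perspective: {perspective_image_size(obj, f, z):.3f} mm"
          f"   telecentric: {telecentric_image_size(obj, m):.3f} mm")
# The perspective image shrinks as the distance grows (the source of the
# perspective errors described above), while the telecentric image size is constant.
```

The few-percent size change predicted by the perspective model over a 10 mm depth shift is exactly the kind of error that a telecentric lens removes by holding magnification constant.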
In summary, telecentric lenses have the following advantages over conventional lenses: (i) constant magnification independent of shifts in the object and/or image planes; (ii) low distortion, normally in the range of 0.1% for high-quality telecentric lenses; (iii) reduction or elimination of perspective error; (iv) increased image resolution; and (v) uniform image plane illumination. However, they also have several disadvantages. First, more optical elements are used than in conventional lens systems due to the complex design. Second, large-aperture optical elements in the region of telecentricity are required to provide a nonvignetted FOV. The large aperture and additional optical elements lead to increases in the cost and weight of the imaging system. Third, traditional telecentric lenses use fixed focal lengths, which result in fixed FOVs. An investigation of varying FOVs requires the use of several fixed-magnification lenses. Zinter and Sanson (2001) presented an endeavor to develop a telecentric zoom lens for this type of task. Finally, it is not possible to obtain depth information from telecentric images due to the orthographic projection of the telecentric lens. Despite the disadvantages inherent in the telecentric lens design due to its increased complexity, the numerous benefits make telecentric lenses a popular choice in a variety of applications. The last drawback motivates the development of a multi-aperture telecentric lens.

Multi-aperture telecentric lens

In applications such as machine vision, the 2-D information and the 3-D depth information from the scene must be extracted simultaneously. Stereo vision using multiple cameras is the conventional approach to infer depth information based on parallax from multiple perspectives, which has a long history in the fields of photogrammetry and computer vision. In recent years, a variety of techniques, such as motion parallax and depth-from-focus, have been implemented in 3-D imaging systems. However, these systems are relatively expensive and require complex camera calibration and geometric processing. As mentioned, the telecentric lens has the desirable property of orthographic projection, which makes it easier to measure or compare an object's physical length independently of its depth in relation to the camera. However, it is not possible to obtain depth information from the image, as there is no foreshortening effect in telecentric images.

The multiple-aperture technique has been used in imaging systems to provide 3-D information. For example, Fife, Gamal, and Wong (2006) presented an image sensor comprising an array of apertures, each with its own local integrated optics and pixel array. A lens focused the image above the sensor, creating overlapping fields of view between apertures. Image disparities from multiple perspectives in the focal plane facilitated the derivation of 3-D information. To take advantage of the orthographic projection of the telecentric lens and the image disparities from multiple apertures, multi-aperture telecentric lenses have been investigated for use in 3-D reconstruction in recent years (Kim and Kanade 2011).

A multi-aperture telecentric lens has multiple aperture stops rather than one, as in a conventional telecentric lens. Figure 2 illustrates an idealized multi-aperture telecentric lens. There are two aperture stops O and O′ on the focal plane, of which O is located at the focal point of the lens and the other, O′, is at a distance from the focal point. The aperture stop O selectively passes light rays that are parallel to the optical axis, because O is located at the focal point of the lens and only such rays pass through it. The aperture stop O′ selectively passes light rays that are parallel to each other but not parallel to the optical axis. Assuming O′ is an infinitely small aperture stop, the light rays selected by O′ are parallel to the vector from the lens center to O′ (blue dashed line in Figure 2).
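As a small illustration of this geometry (not from the original entry; the values of B and f are assumed), the sketch below computes the direction of the parallel ray bundle selected by the off-axis aperture stop O′, which is simply the direction of the vector from the lens center to O′.

```python
# Minimal sketch (assumed values, not from the original entry): direction of
# the parallel ray bundle selected by the off-axis aperture stop O', which
# sits in the focal plane at a lateral offset B from the focal point.
import math

def aperture_ray_direction(B_mm, f_mm):
    """Unit vector (lateral, axial) of the rays selected by O', and their tilt."""
    length = math.hypot(B_mm, f_mm)
    direction = (B_mm / length, f_mm / length)
    tilt_deg = math.degrees(math.atan2(B_mm, f_mm))
    return direction, tilt_deg

direction, tilt = aperture_ray_direction(B_mm=5.0, f_mm=50.0)  # assumed B and f
print(f"ray direction (lateral, axial): ({direction[0]:.4f}, {direction[1]:.4f})")
print(f"tilt from the optical axis: {tilt:.2f} degrees")
# The on-axis aperture O selects the bundle parallel to the optical axis
# (zero tilt), so the two apertures view the scene along two different
# parallel-projection directions, which is what produces an image disparity.
```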
In a conventional stereo vision system, as illustrated in Figure 3, two cameras are used to form stereo images, and the depth information of any object in the scene can be derived from the disparity of the same object imaged on the stereo images. From Figure 3, for an object point P, its disparity is:

    d = x1 + x2 = Bf/Z    (1)

where f is the focal length of the camera, B is the baseline between the stereo cameras, and Z is the depth of point P. In equation 1, both B and f are fixed, and thus the disparity d is linearly proportional to the inverse of the depth.

In a simplified multi-aperture telecentric lens system (1× magnification), as illustrated in Figure 4, the distance between the two aperture stops is B, which can be considered the baseline between the two aperture stops. The focal length of the lens is f. The image plane is located at a distance f from the focal plane. A spatial auxiliary coordinate system is used with its origin at the center of the image plane, and its z-axis aligns with the optical axis. The disparity of an object point at depth Z is then:

    d = (4 − Z/f) B    (2)

In equation 2, both the baseline length, B, and the focal length, f, are fixed; thus the disparity, d, varies linearly with the depth, Z, of the object point.
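Equations 1 and 2 can be inverted to recover depth from a measured disparity. The sketch below (an illustrative reading of the two equations, not code from the original entry; the values of B and f are assumed) contrasts the two cases: in conventional stereo the depth is inversely proportional to the disparity, whereas in the simplified 1× multi-aperture telecentric system the depth is a linear function of the disparity.

```python
# Illustrative sketch of equations 1 and 2 (assumed B and f, in millimetres;
# not from the original entry).

def stereo_depth(d, B, f):
    """Equation 1 inverted: d = B*f/Z  =>  Z = B*f/d (conventional stereo)."""
    return B * f / d

def telecentric_depth(d, B, f):
    """Equation 2 inverted: d = (4 - Z/f)*B  =>  Z = (4 - d/B)*f, with Z
    measured from the image plane along the optical axis (1x system)."""
    return (4.0 - d / B) * f

B, f = 5.0, 50.0                            # assumed baseline and focal length
print(stereo_depth(d=0.5, B=B, f=f))        # 500.0 mm: small disparity, distant point
print(telecentric_depth(d=0.0, B=B, f=f))   # 200.0 mm = 4*f: zero disparity at the in-focus plane
print(telecentric_depth(d=1.0, B=B, f=f))   # 190.0 mm: point nearer than the in-focus plane
```

One practical consequence of the linear relation in equation 2 is that the depth sensitivity dZ/dd has the constant magnitude f/B, so a given disparity error maps to the same depth error at any depth, whereas under equation 1 the depth error for a fixed disparity error grows with the square of the depth.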