Correcting the Chromatic Aberration in Barrel Distortion of Endoscopic Images


Correcting the Chromatic Aberration in Barrel Distortion of Endoscopic Images

Y. M. Harry Ng and C. P. Kwong
Department of Automation and Computer-Aided Engineering, The Chinese University of Hong Kong
[email protected], [email protected]

ABSTRACT

Modern endoscopes offer physicians a wide-angle field of view (FOV) for minimally invasive therapies. However, the high level of barrel distortion may prevent accurate perception of the image. Fortunately, this kind of distortion may be corrected by digital image processing. In this paper we investigate the chromatic aberrations in the barrel distortion of endoscopic images. In the past, chromatic aberration in endoscopes has been corrected by achromatic lenses or active lens control. In contrast, we take a computational approach by modifying the concept of image warping and the existing barrel distortion correction algorithm to tackle the chromatic aberration problem. In addition, an error function for the determination of the level of centroid coincidence is proposed. Simulation and experimental results confirm the effectiveness of our method.

Keywords: Barrel Distortion, Chromatic Aberration, Distortion Correction, Endoscopy, Centroid Estimation and Expansion Coefficients.

1. INTRODUCTION

Minimally Invasive Therapy is increasingly popular because of the use of natural or artificial orifices of the body for surgical operations, which minimizes the destruction of healthy tissues and organs. Electronic endoscopy employing miniature CCD cameras plays an important role in the diagnostic and therapeutic operations of Minimally Invasive Therapy. However, the high level of barrel distortion introduces nonlinear changes in the image: areas near the distortion center are compressed less and areas farther away from the center are compressed more. Because of this phenomenon, the outer areas of the image look significantly smaller than their actual size. Fortunately, this kind of distortion may be corrected by calculating the correction parameters based on least squares estimation. Several researchers have proposed various mathematical models of image distortion and techniques to find the model parameters for distortion correction. For instance, Hideaki et al. [1] have derived a moment matrix from a set of image points and proposed to straighten distorted grid lines in the image based on the smallest characteristic root of this matrix. Asari et al. [2] have proposed a technique based on least squares estimation to find the coefficients of a correction polynomial, to which Helferty et al. [3] later provided an improved method. Specifically, they proposed a technique which finds solutions in which all lines are parallel. Unlike previous approaches, both horizontal and vertical line fits are required to find the optimal distortion center and hence the overall solution. Nevertheless, the above methods concern only the geometric barrel distortion. In our case, we are interested in reducing the chromatic aberration in the barrel distortion of endoscopic images.

For the correction of chromatic aberration, an achromatic doublet is commonly used: two thin lenses, separated or in contact, combined so that the focal length is independent of wavelength [4][5]. Normally one lens is converging and the other is diverging, and they are often cemented together. As it is not possible to fully correct all aberrations, combining two or more lenses is required for the correction. High-quality lenses used in cameras, microscopes, and other devices are compound lenses consisting of many simple lenses; a typical high-quality camera lens may contain six or more lenses. On the other hand, Willson et al. [6] proposed active lens control to correct the chromatic aberration. In the modern endoscope, compound achromatic lenses for correcting chromatic aberration are commonly used.
In this paper, we investigate the chromatic aberration in the barrel distortion of the endoscope. J. Garcia et al. [7] have extracted the three RGB color channels of an image for the measurement of the defocusing from the continuous light spectrum. Because of the depth of field, we assume the images are already focused, so the lateral chromatic aberration can be regarded as a difference in magnification for the different colors. Indeed, the chromatic correction can be performed either using a theoretical model of aberration formation or using experimentally measured aberration values. However, it is practically impossible to describe the contributions of all parts of a particular optical system. Therefore, the use of experimentally measured aberration values is a much better choice. Thus, we take three monochromatic light samples, which are regarded as the three RGB color channels. Then, we take a computational approach by modifying the concept of image warping [8] and the existing barrel distortion correction algorithm by Helferty et al. [3] to tackle the chromatic aberration problem. Once the correction polynomial coefficients are found, the correction of the chromatic aberration in the barrel distortion of endoscopic images can be carried out. As the polynomial expansion coefficients do not change with the distance from the object to the camera lens, the expansion coefficients obtained at a certain distance can be used to correct images captured at any distance within the depth of field range [9]. In addition, an error function for the determination of the level of centroid coincidence is proposed. Finally, we use simulation and experimental results to confirm the effectiveness of our method.

2. THEORETICAL MODEL OF CORRECTION

There have been many studies on the computational correction of barrel distortion. In this paper, we mainly focus on the method proposed by Helferty et al. [3], in which the mapping from the distortion space to the correction space for black-and-white images is

    ρ = ∑_{n=1}^{N} a_n (ρ')^n ,   θ = θ'                                   (1)

where a_n is the set of correction expansion coefficients of an endoscope, ρ' and ρ are the radii from the distortion center to a point in the distortion space and the correction space, respectively, and θ' and θ are the angles of the point in the distortion space and the correction space, respectively. The basic experimental procedure, as suggested by Helferty et al., is to compute the correction expansion coefficients from an endoscopic video image of a target consisting of an equally spaced array of black dots on a white background. Once this correction is computed, it can be applied to live endoscopic video in real time.
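To make the radial mapping of Eq. (1) concrete, the following is a minimal NumPy sketch of how such a polynomial can be applied to pixel coordinates about a known distortion center. The function name, sample coefficients, and center are illustrative assumptions, not values from the paper; the real coefficients come from the least-squares fit to the dot-target image.

```python
import numpy as np

def correct_points(points_xy, coeffs, center):
    """Apply Eq. (1): rho = sum_n a_n * (rho')^n, theta = theta'.

    points_xy : (M, 2) array of pixel coordinates in the distorted image.
    coeffs    : sequence [a_1, ..., a_N] of expansion coefficients.
    center    : (cx, cy) distortion center.
    """
    d = points_xy - np.asarray(center, dtype=float)
    rho_p = np.hypot(d[:, 0], d[:, 1])        # rho' (distorted radius)
    theta = np.arctan2(d[:, 1], d[:, 0])      # theta' = theta (angle unchanged)
    # Polynomial without a constant term: a_1*rho' + a_2*rho'^2 + ... + a_N*rho'^N
    rho = sum(a * rho_p**n for n, a in enumerate(coeffs, start=1))
    return np.column_stack([center[0] + rho * np.cos(theta),
                            center[1] + rho * np.sin(theta)])

# Example with hypothetical coefficients; calibrated values would replace them.
pts = np.array([[320.0, 240.0], [600.0, 100.0]])
print(correct_points(pts, coeffs=[1.0, 0.0, 2e-7], center=(320.0, 240.0)))
```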
An important property of visible light is its color, which is related to the wavelength or frequency of the light. Visible light, to which human eyes are sensitive, falls in the wavelength range of about 400 nm to 750 nm. This is known as the visible spectrum and represents the different colors from violet to red. Light with wavelength shorter than 400 nm is called ultraviolet (UV), and light with wavelength greater than 750 nm is called infrared (IR). A prism separates white light into a rainbow of colors. This happens because the index of refraction of a material depends on the wavelength. With reference to the concept of correcting chromatic aberration using image warping from Boult et al. [8], the geometric distortions must be computed and the red and green images must be fitted to a cubic spline. Since light is a continuous spectrum, we cannot deal only with three separate channels. In order to replace the achromatic lenses by computational correction, we need to take as many light samples as we can. In the meantime, we develop an error function which is used to determine how good the fitting of a cubic spline to the chromatic aberration in the barrel distortion of endoscopic images is. We assume that the tested white light source consists of only three monochromatic light sources and that they belong to the three color channels. Thus, the mappings from the distortion space to the correction space for the tested white light source are

    r_R = ∑_{n=1}^{N} a_{nR} (r'_R)^n ,   θ_R = θ'_R
    r_G = ∑_{n=1}^{N} a_{nG} (r'_G)^n ,   θ_G = θ'_G                        (2)
    r_B = ∑_{n=1}^{N} a_{nB} (r'_B)^n ,   θ_B = θ'_B

where a_{nR}, a_{nG} and a_{nB} are the sets of correction expansion coefficients for the red, green and blue colors, respectively, r' and r are the radii from the distortion center to the point in the distortion space and the correction space, respectively, for the corresponding color, and θ' and θ are the angles of the point in the distortion space and the correction space, respectively, for the corresponding color. The goal of the processing is to calculate the coefficients a_{1R}, ..., a_{NR}, a_{1G}, ..., a_{NG}, a_{1B}, ..., a_{NB}.

Although we have found the coefficients for the three colors, the corrected images may not coincide with each other. In order to compensate for the dispersion of the three colors, we make fine adjustments to the correction polynomial coefficients. Thus, an error function is now proposed. As the green color centroid is located in the middle of the red and blue color centroids, we choose the green color centroid and the green color correction polynomial coefficients as the desired control point. Then, we can finely adjust the correction polynomial coefficients by the following proposed method. First, we compute the distance of the farthest dot centroid along the 45-degree direction from the center point of the image as

    D = √( (x^c)^2 + (y^c)^2 )                                              (3)

where D is the distance of the farthest dot centroid along the 45-degree direction from the center point of the image, and x^c and y^c are the values of that dot centroid in the x-direction and y-direction, respectively. The centroid of each dot is estimated as

    x^c_{ij} = (1/N) ∑_{n} x_{ijn}                                          (5)

where x^c_{ij} is the value in the x-direction of the centroid of dot_ij (row i and column j), x_{ijn} is the value in the x-direction of the elements of dot_ij, and N is the number of elements of dot_ij.
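As a rough illustration of how Eqs. (2), (3) and (5) fit together, the sketch below estimates a dot centroid, measures its distance from the image center, and compares the red and blue channels against the green reference. The excerpt only summarizes the error function, so the coincidence measure used here is an assumption for illustration, and all names are hypothetical.

```python
import numpy as np

def dot_centroid(element_coords):
    """Eq. (5): the centroid of one dot is the mean of its element (pixel) coordinates."""
    return np.mean(np.asarray(element_coords, dtype=float), axis=0)

def diagonal_distance(centroid_xy, image_center):
    """Eq. (3): distance of a dot centroid from the center point of the image."""
    dx, dy = np.asarray(centroid_xy, dtype=float) - np.asarray(image_center, dtype=float)
    return float(np.hypot(dx, dy))

def centroid_coincidence_error(cent_r, cent_g, cent_b, image_center):
    """Illustrative coincidence measure (not the paper's exact error function):
    how far the corrected red and blue centroids sit from the green reference."""
    d_g = diagonal_distance(cent_g, image_center)
    return (abs(diagonal_distance(cent_r, image_center) - d_g)
            + abs(diagonal_distance(cent_b, image_center) - d_g))

# Toy usage with hypothetical centroids of the farthest 45-degree dot in each channel.
center = (320.0, 240.0)
print(centroid_coincidence_error((501.2, 421.0), (500.0, 420.0), (498.9, 419.1), center))
```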
Recommended publications
  • Breaking Down the “Cosine Fourth Power Law”
    Breaking Down The “Cosine Fourth Power Law” By Ronian Siew, inopticalsolutions.com Why are the corners of the field of view in the image captured by a camera lens usually darker than the center? For one thing, camera lenses by design often introduce “vignetting” into the image, which is the deliberate clipping of rays at the corners of the field of view in order to cut away excessive lens aberrations. But, it is also known that corner areas in an image can get dark even without vignetting, due in part to the so-called “cosine fourth power law.” 1 According to this “law,” when a lens projects the image of a uniform source onto a screen, in the absence of vignetting, the illumination flux density (i.e., the optical power per unit area) across the screen from the center to the edge varies according to the fourth power of the cosine of the angle between the optic axis and the oblique ray striking the screen. Actually, optical designers know this “law” does not apply generally to all lens conditions.2 – 10 Fundamental principles of optical radiative flux transfer in lens systems allow one to tune the illumination distribution across the image by varying lens design characteristics. In this article, we take a tour into the fascinating physics governing the illumination of images in lens systems. Relative Illumination In Lens Systems In lens design, one characterizes the illumination distribution across the screen where the image resides in terms of a quantity known as the lens’ relative illumination — the ratio of the irradiance (i.e., the power per unit area) at any off-axis position of the image to the irradiance at the center of the image.
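As a quick numerical illustration (not code from the article), the ideal cosine-fourth falloff of relative illumination can be tabulated directly from the field angle:

```python
import numpy as np

def cos4_relative_illumination(field_angle_deg):
    """Relative illumination predicted by the ideal cosine fourth power law:
    irradiance at a given field angle divided by the on-axis irradiance."""
    return np.cos(np.radians(field_angle_deg)) ** 4

for angle in (0, 10, 20, 30, 40):
    print(f"{angle:2d} deg -> {cos4_relative_illumination(angle):.3f}")
# At 40 degrees the corners receive only about 34% of the on-axis illumination,
# before any vignetting or deliberate design compensation is considered.
```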
  • “Digital Single Lens Reflex”
    PHOTOGRAPHY GENERIC ELECTIVE SEM-II DSLR stands for “Digital Single Lens Reflex”. In simple language, a DSLR is a digital camera that uses a mirror mechanism to either reflect light from a camera lens to an optical viewfinder (which is an eyepiece on the back of the camera that one looks through to see what they are taking a picture of) or let light fully pass onto the image sensor (which captures the image) by moving the mirror out of the way. Although single lens reflex cameras have been available in various shapes and forms since the 19th century with film as the recording medium, the first commercial digital SLR with an image sensor appeared in 1991. Compared to point-and-shoot and phone cameras, DSLR cameras typically use interchangeable lenses. Take a look at the following image of an SLR cross section (image courtesy of Wikipedia): When you look through a DSLR viewfinder / eyepiece on the back of the camera, whatever you see is passed through the lens attached to the camera, which means that you could be looking at exactly what you are going to capture. Light from the scene you are attempting to capture passes through the lens into a reflex mirror (#2) that sits at a 45 degree angle inside the camera chamber, which then forwards the light vertically to an optical element called a “pentaprism” (#7). The pentaprism then converts the vertical light to horizontal by redirecting the light through two separate mirrors, right into the viewfinder (#8). When you take a picture, the reflex mirror (#2) swings upwards, blocking the vertical pathway and letting the light directly through.
  • Estimation and Correction of the Distortion in Forensic Image Due to Rotation of the Photo Camera
    Master Thesis, Electrical Engineering with emphasis on Signal Processing, February 2018. Estimation and Correction of the Distortion in Forensic Image due to Rotation of the Photo Camera. Sathwika Bavikadi and Venkata Bharath Botta, Department of Applied Signal Processing, Blekinge Institute of Technology, SE–371 79 Karlskrona, Sweden. This thesis is submitted to the Department of Applied Signal Processing at Blekinge Institute of Technology in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering with Emphasis on Signal Processing. Contact Information: Authors: Sathwika Bavikadi, E-mail: [email protected]; Venkata Bharath Botta, E-mail: [email protected]. Supervisor: Irina Gertsovich. University Examiner: Dr. Sven Johansson, Department of Applied Signal Processing, Blekinge Institute of Technology, SE–371 79 Karlskrona, Sweden. Internet: www.bth.se, Phone: +46 455 38 50 00, Fax: +46 455 38 50 57. Abstract: Images, unlike text, represent an effective and natural communication medium for humans, due to their immediacy and the easy way to understand the image content. Shape recognition and pattern recognition are among the most important tasks in image processing. Crime scene photographs should always be in focus and a ruler should always be present; this allows the investigators to resize the image to accurately reconstruct the scene. Therefore, the camera must be on a grounded platform such as a tripod. Due to the rotation of the camera around the camera center there exists distortion in the image which must be minimized.
  • Effects on Map Production of Distortions in Photogrammetric Systems
    EFFECTS ON MAP PRODUCTION OF DISTORTIONS IN PHOTOGRAMMETRIC SYSTEMS. J. V. Sharp and H. H. Hayes, Bausch and Lomb Opt. Co. I. INTRODUCTION. THIS report concerns the results of nearly two years of investigation of the problem of correlation of known distortions in photogrammetric systems of mapping with observed effects in map production. To begin with, distortion is defined as the displacement of a point from its true position in any plane image formed in a photogrammetric system. By photogrammetric systems of mapping is meant every major type (1) of photogrammetric instruments. The type of distortions investigated are of a magnitude which is considered by many photogrammetrists to be negligible, but unfortunately in their combined effects this is not always true. To buyers of finished maps, you need not be alarmed by open discussion of these effects of distortion on photogrammetric systems for producing maps. The effect of these distortions does not limit the accuracy of the map you buy; but rather it restricts those who use photogrammetric systems to make maps to limits established by experience. Thus the users are limited to proper choice of flying heights for aerial photography and control of other related factors (2) in producing a map of the accuracy you specify. You, the buyers of maps, are safe behind your contract specifications. The real difference that distortions cause is the final cost of the map to you, which is often established by competitive bidding. II. PROBLEM. In examining the problem of correlating photogrammetric instruments with their resultant effects on map production, it is natural to ask how large these distortions are, whose effects are being considered.
  • Model-Free Lens Distortion Correction Based on Phase Analysis of Fringe-Patterns
    sensors Article Model-Free Lens Distortion Correction Based on Phase Analysis of Fringe-Patterns Jiawen Weng 1,†, Weishuai Zhou 2,†, Simin Ma 1, Pan Qi 3 and Jingang Zhong 2,4,* 1 Department of Applied Physics, South China Agricultural University, Guangzhou 510642, China; [email protected] (J.W.); [email protected] (S.M.) 2 Department of Optoelectronic Engineering, Jinan University, Guangzhou 510632, China; [email protected] 3 Department of Electronics Engineering, Guangdong Communication Polytechnic, Guangzhou 510650, China; [email protected] 4 Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Guangzhou 510650, China * Correspondence: [email protected] † The authors contributed equally to this work. Abstract: The existing lens correction methods deal with the distortion correction by one or more specific image distortion models. However, distortion determination may fail when an unsuitable model is used. So, methods based on the distortion model would have some drawbacks. A model- free lens distortion correction based on the phase analysis of fringe-patterns is proposed in this paper. Firstly, the mathematical relationship of the distortion displacement and the modulated phase of the sinusoidal fringe-pattern are established in theory. By the phase demodulation analysis of the fringe-pattern, the distortion displacement map can be determined point by point for the whole distorted image. So, the image correction is achieved according to the distortion displacement map by a model-free approach. Furthermore, the distortion center, which is important in obtaining an optimal result, is measured by the instantaneous frequency distribution according to the character of distortion automatically. Numerical simulation and experiments performed by a wide-angle lens are carried out to validate the method.
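The core step, recovering the modulated phase of the fringe pattern, can be illustrated with a generic Fourier-transform demodulation. The sketch below assumes a vertical sinusoidal fringe and an arbitrary band-pass width; it is an illustration of the general technique, not the authors' implementation.

```python
import numpy as np

def demodulate_phase(fringe, carrier_freq):
    """Isolate the +1 spectral lobe of a vertical sinusoidal fringe around the carrier
    frequency and return the wrapped (carrier plus modulation) phase of the result."""
    F = np.fft.fftshift(np.fft.fft2(fringe))
    cols = fringe.shape[1]
    u = np.fft.fftshift(np.fft.fftfreq(cols))          # horizontal frequency axis (cycles/pixel)
    mask = np.abs(u[None, :] - carrier_freq) < carrier_freq / 2   # band-pass around +carrier
    analytic = np.fft.ifft2(np.fft.ifftshift(F * mask))
    return np.angle(analytic)                          # wrapped phase in (-pi, pi]

# Synthetic example: an undistorted fringe has phase 2*pi*f0*x everywhere; deviations of
# the demodulated phase from that ramp encode the distortion displacement.
x = np.arange(256)
f0 = 1 / 16                                            # carrier: one period every 16 pixels
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * x)[None, :].repeat(256, axis=0)
print(demodulate_phase(fringe, f0).shape)
```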
  • Depth of Field
    Depth of Field. Lenses form images of objects a predictable distance away from the lens. The distance from the image to the lens is the image distance. Image distance depends on the object distance (distance from object to the lens) and the focal length of the lens. Figure 1 shows how the image distance depends on object distance for lenses with focal lengths of 35 mm and 200 mm. [Figure 1: Dependence of Image Distance Upon Object Distance.] Cameras use lenses to focus the images of objects upon the film or exposure medium. Objects within a photographic scene are usually a varying distance from the lens. Because a lens is capable of precisely focusing objects of a single distance, some objects will be precisely focused while others will be out of focus and even blurred. Skilled photographers strive to maximize the depth of field within their photographs. Depth of field refers to the distance between the nearest and the farthest objects within a photographic scene that are acceptably focused. Figure 2 is an example of a photograph with a shallow depth of field. One variable that affects depth of field is the f-number. The f-number is the ratio of the focal length to the diameter of the aperture. The aperture is the circular opening through which light travels before reaching the lens. Table 1 shows the dependence of the depth of field (DOF) upon the f-number of a digital camera.

    Table 1: Dependence of Depth of Field Upon f-Number and Camera Lens
                 35-mm Camera Lens            200-mm Camera Lens
    f-Number   DN (m)   DF (m)    DOF (m)    DN (m)   DF (m)   DOF (m)
    2.8        4.11     6.39      2.29       4.97     5.03     0.06
    4.0        3.82     7.23      3.39       4.95     5.05     0.10
    5.6        3.48     8.86      5.38       4.94     5.07     0.13
    8.0        3.09     13.02     9.93       4.91     5.09     0.18
    22.0       1.82     Infinity  Infinite   4.775    5.27     0.52

    The DN value represents the nearest object distance that is acceptably focused.
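The trends in Table 1 can be reproduced approximately with the standard thin-lens depth-of-field formulas. The sketch below assumes a subject distance of 5 m and a circle of confusion of about 0.02 mm, since the table's exact assumptions are not stated; it is an approximation, not the table's source.

```python
def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.02):
    """Near and far limits of acceptable focus from the standard hyperfocal formulas."""
    f = focal_mm / 1000.0
    c = coc_mm / 1000.0
    H = f * f / (f_number * c) + f                     # hyperfocal distance (m)
    near = H * subject_m / (H + (subject_m - f))
    far = float('inf') if subject_m >= H else H * subject_m / (H - (subject_m - f))
    return near, far

for N in (2.8, 4.0, 5.6, 8.0, 22.0):
    n35, f35 = depth_of_field(35, N, 5.0)
    n200, f200 = depth_of_field(200, N, 5.0)
    print(f"f/{N:>4}: 35 mm -> {n35:.2f}-{f35:.2f} m, 200 mm -> {n200:.2f}-{f200:.2f} m")
```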
  • What's a Megapixel Lens and Why Would You Need One?
    Theia Technologies white paper. What's a Megapixel Lens and Why Would You Need One? (Also translated in Russian: Что такое мегапиксельный объектив и для чего он нужен.) It's an exciting time in the security department. You've finally received approval to migrate from your installed analog cameras to new megapixel models and expectations are high. As you get together with your integrator, you start selecting the cameras you plan to install, looking forward to getting higher quality, higher resolution images and, in many cases, covering the same amount of ground with one camera that, otherwise, would have taken several analog models. You're covering the basics of those megapixel cameras, including the housing and mounting hardware. What about the lens? You've just opened up Pandora's Box. A Lens Is Not a Lens Is Not a Lens. The lens needed for an IP/megapixel camera is much different than the lens needed for a traditional analog camera. These higher resolution cameras demand higher quality lenses. For instance, in a megapixel camera, the focal plane spot size of the lens must be comparable or smaller than the pixel size on the sensor (Figures 1 and 2). To do this, more elements, and higher precision elements, are required for megapixel camera lenses which can make them more costly than their analog counterparts. [Figure 1: Spot size of a megapixel lens is much smaller, required for good focus on megapixel sensors. Figure 2: Spot size of a standard lens won't allow sharp focus on a megapixel sensor.]
  • EVERYDAY MAGIC Bokeh
    EVERYDAY MAGIC: Bokeh. “Our goal should be to perceive the extraordinary in the ordinary, and when we get good enough, to live vice versa, in the ordinary extraordinary.” ~ Eric Booth. Welcome to Lesson Two of Everyday Magic. In this week’s lesson we are going to dig deep into those magical little orbs of light in a photograph known as bokeh. Pronounced BOH-Kə (or BOH-kay), the word “bokeh” is an English translation of the Japanese word boke, which means “blur” or “haze”. What is Bokeh? Photographically speaking, bokeh is defined as the aesthetic quality of the blur produced by the camera lens in the out-of-focus parts of an image. But what makes this unique visual experience seem magical is the fact that we are not able to ‘see’ bokeh with our superior human vision and excellent depth of field. Bokeh is totally a function of ‘seeing’ through the lens. And it is the camera lens and how it renders the out-of-focus light in the background that gives bokeh its more or less circular appearance. Bokeh and Depth of Field: The key to achieving beautiful bokeh in your images is shooting with a shallow depth of field (DOF), which is the amount of an image which appears acceptably sharp. Playing with Focal Distance: In addition to a shallow depth of field, the bokeh in an image is also determined by 1) the distance between the subject and the background and 2) the distance between the lens and the subject. Depending on how you compose your image, the bokeh can be smooth and ‘creamy’ in appearance or it can be livelier and more energetic.
  • Using Depth Mapping to Realize Bokeh Effect with a Single Camera Android Device EE368 Project Report Authors (SCPD Students): Jie Gong, Ran Liu, Pradeep Vukkadala
    Using Depth Mapping to realize Bokeh effect with a single camera Android device. EE368 Project Report. Authors (SCPD students): Jie Gong, Ran Liu, Pradeep Vukkadala. Abstract: In this paper we seek to produce a bokeh effect with a single image taken from an Android device by post processing. Depth mapping is the core of Bokeh effect production. A depth map is an estimate of depth at each pixel in the photo which can be used to identify portions of the image that are far away and belong to the background and therefore apply a digital blur to the background. We present algorithms to determine the defocus map from a single input image. We obtain a sparse defocus map by calculating the ratio of gradients from the original image and the re-blurred image. Then, the full defocus map is obtained by propagating values from edges to the entire image by using the nearest neighbor method and matting Laplacian. Bokeh effect is usually achieved in high end SLR cameras using portrait lenses that are relatively large in size and have a shallow depth of field. It is extremely difficult to achieve the same effect (physically) in smart phones which have miniaturized camera lenses and sensors. However, the latest iPhone 7 has a portrait mode which can produce Bokeh effect thanks to the dual camera configuration. To compete with iPhone 7, Google recently also announced that the latest Google Pixel Phone can take photos with Bokeh effect, which would be achieved by taking 2 photos at different depths to camera and combining them via software. There is a gap in that neither of the two biggest players can achieve Bokeh effect using only a single camera.
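A compact sketch of the gradient-ratio step described above, assuming a Gaussian re-blur and the usual Gaussian edge model; this is an illustrative reconstruction, not the authors' code, and the edge threshold is an arbitrary assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sparse_defocus_map(gray, sigma0=1.0, edge_thresh=0.05):
    """Estimate defocus blur at edge pixels from the ratio of gradient magnitudes
    between the input image and a re-blurred copy (gradient-ratio method)."""
    gray = gray.astype(float)
    reblur = gaussian_filter(gray, sigma0)
    gy, gx = np.gradient(gray)
    ry, rx = np.gradient(reblur)
    g = np.hypot(gx, gy)
    r = np.hypot(rx, ry) + 1e-12
    ratio = g / r
    sigma = np.zeros_like(gray)
    edges = (g > edge_thresh * g.max()) & (ratio > 1.0)
    # For a Gaussian edge model: sigma_defocus = sigma0 / sqrt(ratio^2 - 1)
    sigma[edges] = sigma0 / np.sqrt(ratio[edges] ** 2 - 1.0)
    return sigma  # sparse map: non-zero only at detected edge pixels
```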
  • Aperture Efficiency and Wide Field-of-View Optical Systems
    Aperture Efficiency and Wide Field-of-View Optical Systems. Mark R. Ackermann, Sandia National Laboratories; Rex R. Kiziah, USAF Academy; John T. McGraw and Peter C. Zimmer, J.T. McGraw & Associates. Abstract: Wide field-of-view optical systems are currently finding significant use for applications ranging from exoplanet search to space situational awareness. Systems ranging from small camera lenses to the 8.4-meter Large Synoptic Survey Telescope are designed to image large areas of the sky with increased search rate and scientific utility. An interesting issue with wide-field systems is the known compromises in aperture efficiency. They either use only a fraction of the available aperture or have optical elements with diameters larger than the optical aperture of the system. In either case, the complete aperture of the largest optical component is not fully utilized for any given field point within an image. System costs are driven by optical diameter (not aperture), focal length, optical complexity, and field-of-view. It is important to understand the optical design trade space and how cost, performance, and physical characteristics are influenced by various observing requirements. This paper examines the aperture efficiency of refracting and reflecting systems with one, two and three mirrors. Introduction: Optical systems exhibit an apparent falloff of image intensity from the center to edge of the image field as seen in Figure 1. This phenomenon, known as vignetting, results when the entrance pupil is viewed from an off-axis point in either object or image space.
  • 12 Considerations for Thermal Infrared Camera Lens Selection Overview
    12 CONSIDERATIONS FOR THERMAL INFRARED CAMERA LENS SELECTION

    OVERVIEW: When developing a solution that requires a thermal imager, or infrared (IR) camera, engineers, procurement agents, and program managers must consider many factors. Application, waveband, minimum resolution, pixel size, protective housing, and ability to scale production are just a few. One element that will impact many of these decisions is the IR camera lens. USE THIS GUIDE AS A REFRESHER ON HOW TO GO ABOUT SELECTING THE OPTIMAL LENS FOR YOUR THERMAL IMAGING SOLUTION.

    1. Waveband determines lens materials.
    2. The image size must have a diameter equal to, or larger than, the diagonal of the array.
    3. The lens should be mounted to account for the back working distance and to create an image at the focal plane array location.
    4. As the Effective Focal Length or EFL increases, the field of view (FOV) narrows (see the short sketch after this list).
    5. The lower the f-number of a lens, the larger the optics will be, which means more energy is transferred to the array.
    8. The transmission value of an IR camera lens is the level of energy that passes through the lens over the designed waveband.
    9. Passive athermalization is most desirable for small systems where size and weight are a factor, while active athermalization makes more sense for larger systems where a motor will weigh and cost less than adding the optical elements for passive athermalization.
    10. Proper mounting is critical to ensure that the lens is in position for optimal performance.
    11. There are three primary phases of production—engineering, manufacturing, and testing—that can …
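Consideration 4 can be made concrete with the usual pinhole relation FOV = 2·atan(array width / (2·EFL)). The sketch below assumes a 640-pixel array at a 17 µm pitch purely for illustration; neither the array size nor the focal lengths come from the guide.

```python
import math

def horizontal_fov_deg(array_width_mm, efl_mm):
    """Full horizontal field of view of a distortion-free lens over a given array width."""
    return math.degrees(2 * math.atan(array_width_mm / (2 * efl_mm)))

# Assumed 640-pixel array at 17 um pitch -> about 10.9 mm wide.
for efl in (9, 13, 19, 35):
    print(f"EFL {efl:>2} mm -> {horizontal_fov_deg(0.017 * 640, efl):.1f} deg HFOV")
```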
  • AG-AF100 28mm Wide Lens
    Contents
    1. What changes when you use the different imager size camera?
       1. What happens?
       2. Focal Length
       2. Iris (F Stop)
       3. Flange Back Adjustment
    2. Why Bokeh occurs?
       1. F Stop
       2. Circle of confusion diameter limit
       3. Airy Disc
       4. Bokeh by Diffraction
       5. 1/3” lens Response (Example)
       6. What does In/Out of Focus mean?
       7. Depth of Field
       8. How to use Bokeh to shoot impressive pictures
       9. Note for AF100 shooting
    3. Crop Factor
       1. How to use Crop Factor
       2. Focal Length and Depth of Field by Imager Size
       3. What is the benefit of large sensor?
    4. Appendix
       1. Size of Imagers
       2. Color Separation Filter
       3. Sensitivity Comparison
       4. ASA Sensitivity
       5. Depth of Field Comparison by Imager Size
       6. F Stop to get the same Depth of Field
       7. Back Focus and Flange Back (Flange Focal Distance)
       8. Distance Error by Flange Back Error
       9. View Angle Formula
       10. Conceptual Schema – Relationship between Iris and Resolution
       11. What’s the difference between Video Camera Lens and Still Camera Lens
       12. Depth of Field Formula

    1. What changes when you use the different imager size camera?
       1. Focal Length changes: a 58 mm standard lens on a 35 mm full-frame camera (CANON, NIKON, LEICA etc.) corresponds to the AG-AF100 28 mm wide lens.
       2. Iris (F Stop) changes, and Depth of Field changes. [Diagram: depth-of-field comparison at F2 and F4 between a 35 mm still camera and a 4/3-inch imager, distance to object 2 m; numeric tick marks omitted.]
       3. …
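A tiny sketch of the crop-factor arithmetic behind the "58 mm full-frame standard lens corresponds to 28 mm on the AG-AF100" comparison, assuming the commonly quoted roughly 2x crop factor for Four Thirds-size imagers; these are rule-of-thumb approximations, not values from the slides.

```python
def full_frame_equivalent(value, crop_factor):
    """Scale a focal length (for framing) or an f-number (for similar depth of field)
    to its 35 mm full-frame equivalent using the rule-of-thumb crop factor."""
    return value * crop_factor

CROP_4_3 = 2.0  # approximate crop factor of a Four Thirds-size imager vs. full frame
print(full_frame_equivalent(28, CROP_4_3))   # ~56 mm: frames like a standard lens
print(full_frame_equivalent(4.0, CROP_4_3))  # ~f/8: comparable depth of field to f/4
```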