Lesson: Focus-Stacking the Perfect
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
-
Camera System Considerations for Geomorphic Applications of SfM Photogrammetry, Adam R
University of Nebraska - Lincoln, DigitalCommons@University of Nebraska - Lincoln, USGS Staff -- Published Research, US Geological Survey, 2017.
Authors: Adam R. Mosbrucker, US Geological Survey, [email protected]; Jon J. Major, US Geological Survey; Kurt R. Spicer, US Geological Survey; John Pitlick, University of Colorado. Follow this and additional works at: http://digitalcommons.unl.edu/usgsstaffpub. Part of the Geology Commons, Oceanography and Atmospheric Sciences and Meteorology Commons, Other Earth Sciences Commons, and the Other Environmental Sciences Commons.
Citation: Mosbrucker, Adam R.; Major, Jon J.; Spicer, Kurt R.; and Pitlick, John, "Camera system considerations for geomorphic applications of SfM photogrammetry" (2017). USGS Staff -- Published Research. 1007. http://digitalcommons.unl.edu/usgsstaffpub/1007. This article is brought to you for free and open access by the US Geological Survey at DigitalCommons@University of Nebraska - Lincoln.
Earth Surface Processes and Landforms 42, 969–986 (2017). Published 2016; this article is a U.S. Government work and is in the public domain in the USA. Published online 3 January 2017 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/esp.4066.
Affiliations: 1 US Geological Survey, Vancouver, WA, USA; 2 Geography Department, University of Colorado, Boulder, CO, USA. Received 17 October 2014; Revised 11 October 2016; Accepted 12 October 2016. *Correspondence to: Adam R. -
Schneider-Kreuznach Expands Its F-Mount Lens Family
FAQ Century Film & Video
1. What is Angle of View?
2. What is an Achromat Diopter?
3. How is an Achromat different from a standard close-up lens?
4. What is a matte box?
5. What is a lens shade?
6. When do you need a lens shade or a matte box?
7. What is aspect ratio?
8. What is 4:3?
9. What is 16:9?
10. What is anamorphic video?
11. What is the difference between the Mark I and Mark II fisheye lenses?
12. What is anti-reflection coating?
13. How should I clean my lens?
14. Why should I use a bayonet mount?
15. How much does my accessory lens weigh?
16. What is hyperfocal distance?
17. What is the hyperfocal distance of my lens?
18. What is a T-Stop?
19. What is a PL mount?
1. What is Angle of View? Angle of view is a measure of how much of the scene a lens can view. A fisheye lens can see as much as 180 degrees, and a telephoto lens might see an angle as narrow as 5 degrees. It is important to distinguish the horizontal angle of view from the vertical angle of view.
2. What is an Achromat Diopter? An achromat diopter is a highly corrected two-element close-up lens that provides extremely sharp images edge to edge without prismatic color effects.
3. How is an Achromat different from a standard close-up lens? Standard close-up lenses, or diopters, are single-element lenses that allow the camera lens to focus more closely on small objects. -
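The angle-of-view figures quoted in answer 1 follow from simple trigonometry for a rectilinear (non-fisheye) lens. A minimal sketch, assuming the common formula 2·arctan(sensor dimension / (2·focal length)) and a 35mm-format frame; the 50mm focal length is an illustrative choice, not from the FAQ:

```python
import math

def angle_of_view(sensor_dim_mm: float, focal_length_mm: float) -> float:
    """Angle of view (degrees) covered by a rectilinear lens along one sensor dimension."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# Horizontal vs. vertical angle of view for a 35mm-format frame (36mm x 24mm)
# behind a 50mm lens -- illustrating why the two must be distinguished.
horizontal = angle_of_view(36, 50)   # ~39.6 degrees
vertical = angle_of_view(24, 50)     # ~27.0 degrees
```

The same function shows why short focal lengths give wide views: halving the focal length roughly doubles the angle for small angles.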
Hyperfocal Distance
Advanced Photography
Topic 5 – Optics – Hyperfocal Distance
Learning Outcomes
In this lesson, we will look at what the hyperfocal distance means and why it is important to optics in digital photography. Working out the hyperfocal distance for a given focal length and aperture can be challenging, but we will go through it step by step so that you have a better understanding of this important concept.
Hyperfocal Distance
Focusing your camera at the hyperfocal distance guarantees maximum sharpness from half of this distance all the way to infinity. The hyperfocal distance is very useful in landscape photography, because it allows you to make the best use of your depth of field and so produces a more detailed final image.
Where Do You Find This Hyperfocal Distance?
The hyperfocal distance is defined as the focus distance which places the furthest edge of the depth of field at infinity. If we were to focus any closer than this, even by the slightest amount, then a distant background would appear unacceptably soft, or 'unsharp'. Alternatively, if we were to focus any farther than this, then part of the available depth of field would be wasted on distances beyond infinity. The best way to optimize your focusing distance is visually. Begin by first focusing on the most distant object within your scene, then manually adjust the focusing distance as close as possible while still retaining an acceptably sharp background. If your scene has distant objects near the horizon, then this focusing distance will closely approximate the hyperfocal distance. -
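The lesson's rule of thumb (sharp from half the hyperfocal distance to infinity) can be computed directly with the standard formula H = f²/(N·c) + f. A small sketch; the 0.03mm circle of confusion is an assumed full-frame value, and the 24mm/f/8 combination is illustrative, neither being stated in the lesson:

```python
def hyperfocal_distance_mm(focal_length_mm: float, f_number: float,
                           coc_mm: float = 0.03) -> float:
    """Hyperfocal distance H = f^2 / (N * c) + f, in millimetres."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

# A 24mm lens at f/8, assuming a 0.03mm circle of confusion:
H = hyperfocal_distance_mm(24, 8)   # 2424 mm, i.e. about 2.4 m
near_limit = H / 2                  # everything from ~1.2 m to infinity is acceptably sharp
```

Stopping down (larger N) or using a shorter focal length pulls the hyperfocal distance closer, which is exactly why wide-angle lenses are favoured for deep landscape focus.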
Optical Design of a Novel Collimator System with a Variable Virtual-Object Distance for an Inspection Instrument of Mobile Phone Camera Optics
applied sciences Article
Optical Design of a Novel Collimator System with a Variable Virtual-Object Distance for an Inspection Instrument of Mobile Phone Camera Optics
Hojong Choi 1, Se-woon Choe 1,2,* and Jaemyung Ryu 3,*
1 Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea; [email protected]
2 Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
3 Department of Optical Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
* Correspondence: [email protected] (S.-w.C.); [email protected] (J.R.); Tel.: +82-54-478-7781 (S.-w.C.); +82-54-478-7778 (J.R.)
Featured Application: Optical inspection instrument for a mobile phone camera.
Abstract: The resolution performance of mobile phone camera optics was previously checked only near an infinite point. However, near-field performance is required because of reduced camera pixel sizes. Traditional optics are measured using a resolution chart located at a hyperfocal distance, which can only measure the resolution at that specific distance, not at close distances. We designed a new collimator system that can change the virtual image of the resolution chart from infinity to a short distance; hence, some lenses inside the collimator system must be moved. If the focusing lens is moved, however, chromatic aberration and field curvature occur, and additional lenses are required to correct this problem. The added lens must not change the characteristics of the proposed collimator. Therefore, an equivalent-lens conversion method was designed to maintain the first-order and Seidel aberrations. -
The Hyperfocal Distance ©2001 Reefnet Inc
The Hyperfocal Distance ©2001 ReefNet Inc.
Consider the arrangement shown, consisting of a perfect converging lens, an aperture of diameter D, a point object A on the lens axis, and film. When the image of A is perfectly focused at the film plane, some of the scene behind A and in front of A is also in focus. Suppose that A is very far from the lens (at 'infinity'). By definition, light rays from A will be focused at the point B on the lens' secondary focal plane, which is a distance F (the focal length) behind the center of the lens. If the film plane is positioned to contain point B, the film will record a perfectly focused image of A.
Now bring A closer, to a finite distance O from the lens, but still far enough away that O is much larger than F. The image recedes to a distance I behind the lens, in accordance with the Gaussian (lens-maker's) formula

1/O + 1/I = 1/F    (1)

and the film plane must move with it to record a sharp image. With object A at this new position, any light rays coming from other objects behind it are focused somewhere between the film plane and B, with the most out-of-focus rays being those coming from the most distant objects. In particular, rays coming from infinity meet at B but continue on to form a circle of confusion at the film plane. By moving A back and forth, and moving the film accordingly to maintain a sharp image, the circle of confusion can be made larger or smaller. -
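Equation (1) can be solved for the image distance I, which makes the passage's two limiting cases easy to check numerically. A small sketch (distances in millimetres; the 50mm focal length and 5m object distance are illustrative assumptions, not from the text):

```python
def image_distance(object_distance: float, focal_length: float) -> float:
    """Solve the Gaussian lens formula 1/O + 1/I = 1/F for the image distance I."""
    return 1 / (1 / focal_length - 1 / object_distance)

# Focusing a 50mm lens (F = 50mm) on an object 5m away:
I = image_distance(5000, 50)       # the film plane sits ~50.5 mm behind the lens
# As O grows toward infinity, I approaches F: rays from infinity focus at B.
far = image_distance(1e12, 50)
```

Note how little the film plane moves (about half a millimetre) between focusing at 5m and at infinity, which is why small focusing errors matter so much at wide apertures.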
An Introduction to Depth of Field by Jeff Conrad for the Large Format Page
An Introduction to Depth of Field
By Jeff Conrad for the Large Format Page
In many types of photography, it is desirable to have the entire image sharp. Strictly speaking, this is impossible: a camera can precisely focus on only one plane; a point object in any other plane is imaged as a disk rather than a point, and the farther a plane is from the plane of focus, the larger the disk. This is illustrated in Figure 1.
[Figure 1. In-Focus and Out-of-Focus Objects, showing object distances u and ud, image distances v and vd, and blur-spot diameter k]
Following convention for optical diagrams, the object side of the lens is on the left, and the image side is on the right. The object in the plane of focus at distance u is sharply imaged as a point at distance v behind the lens. The object at distance ud would be sharply imaged at a distance vd behind the lens; however, at the focus distance v, it is imaged as a disk, known as a blur spot, of diameter k. Reducing the size of the aperture stop reduces the size of the blur spot, as shown in Figure 2. If the blur spot is sufficiently small, it is indistinguishable from a point, so that a zone of acceptable sharpness exists between two planes on either side of the plane of focus. This zone is known as the depth of field (DoF). The plane at un is the near limit of the DoF, and the plane at uf is the far limit of the DoF. The diameter of a "sufficiently small" blur spot is known as the acceptable circle of confusion, or simply as the circle of confusion (CoC). -
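The blur-spot diameter k described for Figure 1 follows from the thin-lens equation plus similar triangles across the aperture. A sketch of that geometry under the thin-lens assumption; the 50mm, f/8, 2m/3m numbers are illustrative and not from the text:

```python
def blur_spot_diameter(f: float, N: float, u_focus: float, u_object: float) -> float:
    """Diameter k of the blur spot on the film/sensor for an object at u_object
    when the lens is focused at u_focus (thin-lens model, all distances in mm).
    N is the f-number, so the aperture diameter is D = f / N."""
    D = f / N
    v = 1 / (1 / f - 1 / u_focus)     # film-plane distance for the focused plane
    vd = 1 / (1 / f - 1 / u_object)   # where the defocused object would focus
    # Similar triangles: the cone of diameter D converging at vd has width
    # D * |vd - v| / vd where it crosses the film plane at v.
    return D * abs(vd - v) / vd

# 50mm lens at f/8 focused at 2m: an object at 3m images as a small disk.
k = blur_spot_diameter(50, 8, 2000, 3000)   # ~0.053 mm on the film
```

Comparing k against an acceptable circle of confusion (say 0.03mm) is exactly the test that decides whether the 3m object lies inside the depth of field.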
Depth of Field and Scheimpflug's Rule
Depth of field and Scheimpflug's rule: a "minimalist" geometrical approach
Emmanuel BIGLER, September 6, 2002
ENSMM, 26 chemin de l'Epitaphe, F-25030 Besancon cedex, FRANCE, e-mail: [email protected]
Abstract: We show here how pure geometrical considerations, with an absolute minimum of algebra, yield the solution for the position of the slanted planes defining the limits of acceptable sharpness (an approximation valid for distant objects) for depth of field (DOF) combined with SCHEIMPFLUG's rule. The problem of depth of focus is revisited using a similar approach. General formulae for DOF, valid in the close-up range, are given in an appendix. The significance of the circle of least confusion, on which all DOF computations are based, even in the case of a tilted view camera lens, and the choice of possible numerical values, are also explained in detail in the appendix.
Introduction: We address here the question that immediately follows the application of SCHEIMPFLUG's rule: when a camera is properly focused for a pair of object/image conjugated slanted planes (satisfying "SCHEIMPFLUG's rule of 3 intersecting planes"), what is actually the volume in object space that will be rendered sharp, given a certain criterion of acceptable sharpness in the (slanted) image/film plane? The reader interested in a comprehensive, rigorous, mathematical study based on the geometrical approach of the circle of confusion should refer to Bob WHEELER's work [1] or the comprehensive review by Martin TAI [2]. A nice graphical explanation is presented by Leslie STROEBEL in his reference book [5], but no details are given. -
Hyperfocal Distance
Tips for better photos (Lunchbite series)
Difference between Digital and Film Camera
Image Sensor: DC: image sensor (CCD, CMOS); FC: negative film, slide, etc.
Storage: DC: memory card (CF, SD, Memory Stick); FC: film roll
Instant Display: DC: can view the photo on the LCD immediately; FC: you must develop the film first
Sensor Size: DC: usually smaller (e.g. 7.2mm x 5.3mm); FC: 35mm film (36mm x 24mm)
Film (Sensor) Speed (i.e. ISO 100, 200, 400, ...): DC: can change ISO anytime; FC: depends on the film used
Lens: DC: smaller focal length (e.g. 7mm - 21mm); FC: larger focal length (e.g. 34mm - 102mm)
Shutter / Aperture
Shutter Speed: the amount of time that light is allowed to pass through the aperture (e.g. 15s, 1s, 1/30s, 1/60s, 1/1000s, etc.)
Aperture: controls the amount of light that reaches the sensor. The diameter of an aperture is measured in f-stops (the smaller the f-stop, the larger the diameter).
Depth of Field
Depth of field (DOF): the amount of distance between the nearest and farthest objects that appear in acceptably sharp focus in a photo. Aperture, focal length, sensor size and subject distance also affect -
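The slide's note that a smaller f-stop means a larger opening is just the relation D = f / N. A one-line illustration (the 50mm focal length and the stops chosen are arbitrary examples):

```python
def aperture_diameter(focal_length_mm: float, f_number: float) -> float:
    """Physical aperture diameter implied by an f-stop: D = f / N."""
    return focal_length_mm / f_number

# Smaller f-number -> larger opening, as the slide notes:
wide = aperture_diameter(50, 2.0)    # 25.0 mm opening at f/2
narrow = aperture_diameter(50, 16)   # 3.125 mm opening at f/16
```

Each full stop (f/2, f/2.8, f/4, ...) multiplies N by √2, halving the aperture area and therefore the light reaching the sensor.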
CS 448A, Winter 2010 Marc Levoy
Limitations of lenses
CS 448A, Winter 2010, Marc Levoy, Computer Science Department, Stanford University
Outline: misfocus & depth of field; aberrations & distortion; veiling glare; flare and ghost images; vignetting; diffraction.
Circle of confusion (C): C depends on the sensing medium, reproduction medium, viewing distance, human vision, and so on. For a print from 35mm film, 0.02mm is typical; for a high-end SLR, 6µ (1 pixel) is typical; it is larger if downsizing for the web, or if the lens is poor.
Depth of field formula: the transverse magnification is M_T = y_i / y_o = -s_i / s_o. DoF is asymmetrical around the in-focus object plane, and the conjugate of the circle of confusion in object space is typically bigger than C; for an in-focus plane at distance U it is approximately C / M_T ≈ C U / f.
By similar triangles with the aperture diameter f/N, D_1 / (U - D_1) = (C U / f) / (f / N), which gives the near and far depth-of-field extents
D_1 = N C U² / (f² + N C U) and D_2 = N C U² / (f² - N C U).
The total depth of field is therefore
D_TOT = D_1 + D_2 = 2 N C U² f² / (f⁴ - N² C² U²).
The N² C² U² term can be ignored when the conjugate of the circle of confusion is small relative to the aperture, giving
D_TOT ≈ 2 N C U² / f²,
where N is the F-number of the lens, C is the circle of confusion (on the image), U is the distance to the in-focus plane (in object space), and f is the focal length of the lens.
Example: N = f/4.1, C = 2.5µ, U = 5.9m (19'), ... -
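The exact and approximate total-DoF expressions above are easy to compare numerically. A sketch using the slide's example values (N = 4.1, C = 2.5µ, U = 5.9m, all converted to millimetres); the focal length is not given in the excerpt, so f = 50mm is an assumption for illustration:

```python
def dof_total(N: float, C: float, U: float, f: float) -> float:
    """Total depth of field D_TOT = 2*N*C*U^2*f^2 / (f^4 - N^2*C^2*U^2).
    All lengths must be in the same unit (here: mm)."""
    return 2 * N * C * U**2 * f**2 / (f**4 - N**2 * C**2 * U**2)

def dof_total_approx(N: float, C: float, U: float, f: float) -> float:
    """Approximation D_TOT ~= 2*N*C*U^2/f^2, valid when N*C*U << f^2."""
    return 2 * N * C * U**2 / f**2

# Slide's example values, with an ASSUMED focal length of 50mm:
exact = dof_total(4.1, 0.0025, 5900, 50)          # ~285.6 mm of total DoF
approx = dof_total_approx(4.1, 0.0025, 5900, 50)  # ~285.4 mm
```

With these numbers N·C·U ≈ 60 mm² against f² = 2500 mm², so the two forms agree to well under one percent, illustrating when the simplified formula is safe.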
Extending Depth of Field Via Multifocus Fusion
University of Tennessee, Knoxville
TRACE: Tennessee Research and Creative Exchange, Doctoral Dissertations, Graduate School, 12-2011
Extending Depth of Field via Multifocus Fusion
Harishwaran Hariharan, [email protected]
Follow this and additional works at: https://trace.tennessee.edu/utk_graddiss. Part of the Signal Processing Commons.
Recommended Citation: Hariharan, Harishwaran, "Extending Depth of Field via Multifocus Fusion." PhD diss., University of Tennessee, 2011. https://trace.tennessee.edu/utk_graddiss/1187
This Dissertation is brought to you for free and open access by the Graduate School at TRACE: Tennessee Research and Creative Exchange. It has been accepted for inclusion in Doctoral Dissertations by an authorized administrator of TRACE: Tennessee Research and Creative Exchange. For more information, please contact [email protected].
To the Graduate Council: I am submitting herewith a dissertation written by Harishwaran Hariharan entitled "Extending Depth of Field via Multifocus Fusion." I have examined the final electronic copy of this dissertation for form and content and recommend that it be accepted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, with a major in Electrical Engineering. Mongi Abidi, Major Professor. We have read this dissertation and recommend its acceptance: Andreas Koschan, Seddik Djouadi, Frank Guess. Accepted for the Council: Carolyn R. Hodges, Vice Provost and Dean of the Graduate School. (Original signatures are on file with official student records.)
Extending Depth of Field via Multifocus Fusion: a dissertation presented for the Doctor of Philosophy Degree, The University of Tennessee, Knoxville. Harishwaran Hariharan, December 2011.
Dedicated to the strength and courage of my mother, Mrs. P.V. Swarnakumari, and with fond salutations to the cities of Coimbatore, Thamizhnadu (TN) & Knoxville, Tennessee (TN), and the wonderful people that live(d) there and love me. -
Advanced Photomicrography, 3D Model of Diatom by Stefano Barone, Diatom
Diatom Shop (www.diatomshop.com) offers microslides and arrangements with diatoms, radiolarians and other microscopic objects for optical microscopy and scanning electron microscopy (SEM); test slides for optical microscopy are also available.
Microscopic photography (or photomicrography) often has to deal with depth of field: for example, if you want to capture a diatom, especially at medium or high magnification, you will realize that a single shot will hardly allow you to focus all the details of the frustule, as the latter develops on different focal planes. Few know that in the 1930s (and therefore when photography was solely analogue) this problem was already addressed and solved by P. Lefébure (Bulletin de la Société Française de Microscopie, Vol. 1, 1932), who elaborated the Méthode des poses successives (method of successive exposures), which consisted of making several exposures on different focal planes while keeping the same photographic plate! A successful example of this technique appears on page 15 of the Panorama du Micro-monde, Technique Moderne de la Microphotographie by Louis-Jacques Laporte (Paris, 1949), where we find a beautiful black-and-white photomicrograph of radiolarians with all their fine details. The digital revolution has made photography with the microscope undoubtedly easier and more or less within everyone's reach, even if producing beautiful and accurate images still requires a certain mastery of the tool. It is also only fair to recognize that the concept underlying the Méthode des poses successives has never left us, even though eighty-seven years have elapsed!
In fact, today we have the more refined focus stacking (or Z-stacking): a digital photographic technique that uses a series of shots of the same subject taken at different focus levels, which are processed by editing software (for example Helicon Focus, Zerene Stacker, CombineZP, and Adobe Photoshop in the CS4, CS5 and CS6 versions).
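The per-pixel selection step that focus-stacking software performs can be sketched in a few lines of NumPy: score each frame's local sharpness and keep, at every pixel, the value from the sharpest frame. Real tools such as Helicon Focus or Zerene Stacker add image alignment and multi-scale blending; this is only a minimal illustration of the core idea, using a discrete Laplacian as the sharpness measure:

```python
import numpy as np

def focus_stack(images: np.ndarray) -> np.ndarray:
    """Merge a stack of grayscale frames (N, H, W), each focused on a different
    plane, by keeping the sharpest pixel from each frame. Sharpness is the
    absolute response of a 5-point discrete Laplacian (edges wrap around)."""
    lap = np.empty(images.shape, dtype=float)
    for i, frame in enumerate(images):
        img = frame.astype(float)
        lap[i] = np.abs(
            -4 * img
            + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
            + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)
        )
    best = np.argmax(lap, axis=0)                      # sharpest frame per pixel
    return np.take_along_axis(images, best[None], axis=0)[0]
```

Because sharp detail produces strong Laplacian response while defocused regions are smooth, each pixel of the output comes from the frame where that part of the subject was in focus, mimicking the Méthode des poses successives digitally.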