AG-AF100 28mm Wide Lens


Contents

1. What changes when you use a camera with a different imager size?
   1. What happens?
   2. Focal Length
   3. Iris (F Stop)
   4. Flange Back Adjustment
2. Why does bokeh occur?
   1. F Stop
   2. Circle of confusion diameter limit
   3. Airy Disc
   4. Bokeh by Diffraction
   5. 1/3" lens Response (Example)
   6. What does In/Out of Focus mean?
   7. Depth of Field
   8. How to use Bokeh to shoot impressive pictures
   9. Notes for AF100 shooting
3. Crop Factor
   1. How to use Crop Factor
   2. Focal Length and Depth of Field by Imager Size
   3. What is the benefit of a large sensor?
4. Appendix
   1. Size of Imagers
   2. Color Separation Filter
   3. Sensitivity Comparison
   4. ASA Sensitivity
   5. Depth of Field Comparison by Imager Size
   6. F Stop to get the same Depth of Field
   7. Back Focus and Flange Back (Flange Focal Distance)
   8. Distance Error by Flange Back Error
   9. View Angle Formula
   10. Conceptual Schema – Relationship between Iris and Resolution
   11. What's the difference between a Video Camera Lens and a Still Camera Lens?
   12. Depth of Field Formula

1. What changes when you use a camera with a different imager size?

1. The focal length changes.
   The AG-AF100's 28mm wide lens frames like a 58mm standard lens on a 35mm full-frame camera (Canon, Nikon, Leica, etc.).

2. The iris (F stop) and depth of field change.
   [Figure: object at 2m, iris F4. On a 35mm still camera the depth of field is about 0.26m; on a 4/3-inch imager at F4 it is about 0.9m, and the iris must open to about F2 to get a comparably shallow field.]

3. The flange back adjustment tolerance changes.
   35mm full frame: tolerance < 0.8mm (f = 50mm @ F22)
   4/3-inch: tolerance < 0.21mm (f = 25mm @ F11)
   The adjustment must be more precise on the smaller format.

1-1. What happens?

If you use a 50mm lens made for 35mm full frame (Canon, Nikon, and so on) on a 4/3-inch camera:
• Field of view: the framing becomes 2× more telephoto; to keep the framing a 50mm lens gives on full frame, you need a 25mm lens on the 4/3-inch camera.
• Depth of field: it becomes deeper. [Figure: 50mm at F5.6 on a 4/3-inch camera vs. 50mm at F5.6 on 35mm full frame vs. 25mm at F2.8 on a 4/3-inch camera.]
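The 2× relationship above can be checked with the horizontal view-angle formula (the appendix lists a "View Angle Formula"; this sketch assumes the usual θ = 2·atan(w / 2f) form, a 36mm-wide full frame, and a crop factor of 2 for the 4/3-inch imager — the helper name is mine):

```python
import math

def view_angle_deg(focal_mm: float, sensor_width_mm: float) -> float:
    """Horizontal view angle: theta = 2 * atan(width / (2 * focal length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

FULL_FRAME_WIDTH = 36.0                      # 35mm full frame horizontal width (assumed)
FOUR_THIRDS_WIDTH = FULL_FRAME_WIDTH / 2.0   # crop factor 2 -> 18mm wide

# 50mm on full frame is the document's 39.6-degree "standard" view.
print(round(view_angle_deg(50, FULL_FRAME_WIDTH), 1))    # -> 39.6

# The same 50mm lens on the 4/3-inch imager frames like 100mm on full frame.
print(round(view_angle_deg(50, FOUR_THIRDS_WIDTH), 1))   # -> 20.4
print(round(view_angle_deg(100, FULL_FRAME_WIDTH), 1))   # -> 20.4

# A 25mm lens restores the standard framing on the 4/3-inch camera.
print(round(view_angle_deg(25, FOUR_THIRDS_WIDTH), 1))   # -> 39.6
```

The same function reproduces the document's wide (65.5°) and tele (10.3°) angles for 28mm and 200mm on full frame.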
1-2. Focal Length

• The focal length needed for the same view angle (horizontal angle) changes with imager size.
• Compared with 35mm full frame (standard horizontal view angle: 39.6°):

                     Wide (65.5°)   Standard (39.6°)   Tele (10.3°)
  35mm Full Frame:   28mm           50mm               200mm
  ANSI Super 35mm:   19mm           35mm               138mm
  Normal 35mm:       17mm           31mm               122mm
  4/3-inch:          14mm           25mm               99mm
  2/3-inch:          7.5mm          13.3mm             53mm
  1/3-inch:          4mm            7.2mm              29mm

1-3. Iris (F Stop)

• If the imager size changes, the depth of field changes.
  – At the same F stop, the depth of field varies with the imager size.
  – To keep the same depth of field, you have to change the F stop.
• [Figure: object at 2m; F stop required on each format for the same focus area (about 0.8m to 3.2m):]
  35mm still camera: F22
  Super 35mm film: F14
  4/3-inch: F11
  2/3-inch: F5.6
  1/2-inch: F4
  1/3-inch: F3

1-3-1. Flange Back Adjustment

[Figure: lens mount, image plane, flange focal length. With the distance scale at 5m, a 0.1mm flange back error focuses a 50mm lens at 4.2m, a 25mm lens at 2.8m, and a 13.2mm lens at 1.3m.]
• Flange back error of 0.1mm: 50mm lens — 0.8m error (16%); 25mm lens — 2.2m error (44%).

1-3-2. What happens if the flange back is not correct?

[Figure: lens mount and image plane (CCD or film) — flange focal distance, object distance L, focal length f, image distance m, error Δm; m' = m + Δm.]
• If an object is 5m from the camera, the actual focus distance for a given flange back error is:

  Flange back error:     0.1mm   0.05mm   0.01mm   0.005mm
  Focal length 100mm:    4.78m   4.9m     5.0m     5.0m
  Focal length 50mm:     4.2m    4.6m     4.9m     4.9m
  Focal length 25mm:     2.8m    3.6m     4.6m     4.8m
  Focal length 13.2mm:   1.3m    2.1m     3.9m     4.4m
  Focal length 7.2mm:    0.48m   0.87m    2.6m     3.4m

• The distance scale becomes incorrect: even if you set the lens scale to 5m, the actual focus is not at 5m.
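The distance-error table follows from the thin-lens equation 1/f = 1/m + 1/L: a flange back error Δm shifts the image distance m, so the lens focuses at a different object distance than the scale indicates. A minimal sketch under that assumption (the function name is mine):

```python
def actual_focus_m(focal_mm: float, scale_m: float, error_mm: float) -> float:
    """Thin-lens model: where the lens actually focuses when the distance
    scale is set to scale_m but the flange back is off by error_mm."""
    L = scale_m * 1000.0                                    # object distance, mm
    m = 1.0 / (1.0 / focal_mm - 1.0 / L)                    # image distance with correct flange back
    m_err = m + error_mm                                    # the error adds to the image distance
    return (1.0 / (1.0 / focal_mm - 1.0 / m_err)) / 1000.0  # object distance now in focus, m

for f in (100, 50, 25, 13.2, 7.2):
    print(f"f = {f}mm, 0.1mm error -> focuses at {actual_focus_m(f, 5, 0.1):.2f}m")
# e.g. f = 50mm with a 0.1mm error focuses near 4.2m, as in the table.
```

Note how the shift grows rapidly as the focal length shrinks, which is why the wide end is the critical case for flange back adjustment.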
1-3-3. Flange Focal Distance Tolerance

Flange back error tolerance at a focus distance of 5m:

  1/3-inch (f = 7.2mm):
    Iris:               F5.6     F4       F2.8     F2       F1.6
    Flange back error:  0.03mm   0.022mm  0.015mm  0.011mm  0.009mm
  2/3-inch (f = 13.2mm):
    Iris:               F11      F8       F5.6     F4       F3.2
    Flange back error:  0.11mm   0.08mm   0.056mm  0.04mm   0.032mm
  4/3-inch (f = 25mm):
    Iris:               F22      F16      F11      F8       F5.6
    Flange back error:  0.41mm   0.30mm   0.21mm   0.15mm   0.11mm
  35mm Full Frame (f = 50mm):
    Iris:               F44      F31      F22      F16      F11      F5.6
    Flange back error:  1.7mm    1.2mm    0.86mm   0.62mm   0.42mm   0.21mm

• When the imager size is halved, the required accuracy becomes four times tighter (1/4).
• The wider the lens, the more precise the flange back adjustment must be.

2. Why does bokeh (diffraction) occur?

• Fraunhofer diffraction: d = 1.22·λ·F (Airy disc radius)
  – λ: wavelength of light; F: F stop (F = f/N, where N is the effective diameter of the iris)
  – At a fixed wavelength, this diffraction bokeh is determined by the F stop alone.

2-1. F Stop

• F = f/N (f: focal length, N: effective diameter of the iris)
  – The bigger the F number, the smaller the iris diameter.
  – At the same F stop, the shorter the focal length, the smaller the iris diameter.
• [Figure: iris diameters — at F4, f = 50mm gives N = 12.5mm and f = 25mm gives N = 6.25mm; at F2.8, N = 17.9mm and 8.9mm; at F2, N = 25mm and 12.5mm. Closed iris (large F): deep focus; open iris (small F): shallow focus.]

2-2. The circle of confusion diameter limit

• The CoC (circle of confusion) limit is d/(1000~1500), where d is the diagonal measure of the original image.
• For example, if CoC = d/1101, the imager resolves 1080 TV lines (2 × 1101 / 2.04 ≈ 1080).
• [Figure: 16:9 imager with diagonal d = 6mm, CoC = 0.00545mm. At F8 the Airy disc diameter (2 × 1.22·λ·F) is 0.0107mm, larger than the CoC, so two adjacent lines cannot be distinguished; at F2 it is 0.00268mm, smaller than the CoC, so two lines can be distinguished. The crossover is near F4.1, where the Airy disc (0.00537mm at F4) reaches the CoC.]
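The F4.1 crossover for the 6mm-diagonal imager can be reproduced directly from the two formulas above (CoC = d/1101, Airy disc diameter 2.44·λ·F); a sketch assuming λ = 550nm, with helper names of my own:

```python
WAVELENGTH_MM = 550e-6  # 550nm green light, the document's reference wavelength

def airy_diameter_mm(f_stop: float) -> float:
    """Airy disc diameter: 2 * 1.22 * lambda * F = 2.44 * lambda * F."""
    return 2.44 * WAVELENGTH_MM * f_stop

def coc_mm(diagonal_mm: float) -> float:
    """Circle of confusion for 1080 TV lines on a 16:9 imager: d / 1101."""
    return diagonal_mm / 1101.0

def diffraction_limit_fstop(diagonal_mm: float) -> float:
    """F stop at which the Airy disc diameter reaches the CoC."""
    return coc_mm(diagonal_mm) / (2.44 * WAVELENGTH_MM)

print(round(airy_diameter_mm(8), 4))         # F8 -> 0.0107mm, larger than the 0.00545mm CoC
print(round(airy_diameter_mm(2), 5))         # F2 -> 0.00268mm, smaller than the CoC
print(round(diffraction_limit_fstop(6), 1))  # d = 6mm imager -> about F4.1
```

Stopping down past `diffraction_limit_fstop` costs resolution even though the depth of field keeps growing.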
• If the aspect ratio is 16:9, the picture height is d/2.04, so a CoC of d/1101 corresponds to 1080 TV lines of vertical resolution. As the F stop gets bigger, the Airy disc grows; once its diameter (2.44·λ·F) reaches the CoC, you can no longer distinguish two adjacent lines.

2-2-2. Comparison by Imager Size

• Airy disc diameter: d = 2.44·λ·F
  F2: 0.00268mm  F2.8: 0.00376mm  F4: 0.00537mm  F5.6: 0.00752mm  F8: 0.0107mm  F11: 0.0148mm  F16: 0.0215mm
• [Figure: Airy disc diameter vs. CoC —]
  1/3-inch: CoC = 0.00545mm
  2/3-inch: CoC = 0.00999mm
  4/3-inch: CoC = 0.0185mm

2-2-3. Comparison by Imager Size (large formats)

• Airy disc diameter: d = 2.44·λ·F
  F5.6: 0.00752mm  F8: 0.0107mm  F11: 0.0148mm  F16: 0.0215mm  F22: 0.0295mm  F32: 0.0429mm  F44: 0.0590mm
• [Figure: Airy disc diameter vs. CoC —]
  APS-C: CoC = 0.0244mm
  ANSI Super 35: CoC = 0.0260mm
  35mm Full Frame: CoC = 0.0375mm

Comparison by TV Format

• Airy disc diameter d = 2.44·λ·F:
  F2: 2.7μ  F2.8: 3.8μ  F4: 5.4μ  F5.6: 7.5μ  F8: 10.7μ  F11: 14.8μ  F16: 21.5μ  F22: 29.5μ  F32: 42.9μ  F44: 59.0μ
• [Figures: Airy disc diameter vs. CoC for each format and TV standard —]
  2/3-inch at PAL:   CoC = 19μ
  2/3-inch at 720p:  CoC = 15μ
  2/3-inch at 1080i: CoC = 10μ
  1/3-inch at 1080i: CoC = 5.5μ
  1/4-inch at 1080i: CoC = 4.1μ
  4/3-inch at 1080i: CoC = 42μ

2-3. Airy Disc Calculation

In the case of F11:

  Purple  380–450nm:  radius d = 0.005~0.006mm    diameter = 0.01~0.012mm
  Blue    450–495nm:  radius d = 0.006~0.0066mm   diameter = 0.012~0.013mm
  Green   495–570nm:  radius d = 0.0066~0.0076mm  diameter = 0.013~0.015mm
  Yellow  570–590nm:  radius d = 0.0076~0.0079mm  diameter = 0.015~0.016mm
  Orange  590–620nm:  radius d = 0.0079~0.0083mm  diameter = 0.016~0.017mm
  Red     620–750nm:  radius d = 0.0083~0.010mm   diameter = 0.017~0.020mm

• Because the value varies with wavelength, 550nm — in the middle of the visible range — is used as the reference.
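The per-colour figures can be reproduced from the Airy radius formula d = 1.22·λ·F; a sketch at F11 using the band edges from the table (the `BANDS` structure is my own):

```python
def airy_radius_mm(wavelength_nm: float, f_stop: float) -> float:
    """Airy disc radius: d = 1.22 * lambda * F (Fraunhofer diffraction)."""
    return 1.22 * (wavelength_nm * 1e-6) * f_stop

BANDS = {  # wavelength ranges (nm) from the table above
    "purple": (380, 450), "blue": (450, 495), "green": (495, 570),
    "yellow": (570, 590), "orange": (590, 620), "red": (620, 750),
}

for colour, (lo, hi) in BANDS.items():
    r_lo, r_hi = airy_radius_mm(lo, 11), airy_radius_mm(hi, 11)
    print(f"{colour:6s} radius {r_lo:.4f}-{r_hi:.4f}mm, "
          f"diameter {2 * r_lo:.4f}-{2 * r_hi:.4f}mm")
# purple at F11: radius about 0.0051-0.0060mm, matching the table's 0.005-0.006mm
```

Red light diffracts almost twice as strongly as purple at the same F stop, which is why a single reference wavelength (550nm) is used for the comparisons in this document.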