AG-AF100 28mm Wide Lens
Total pages: 16. File type: PDF, size: 1020 KB.
Recommended publications
-
All About Raspberry Pi HQ Camera Lenses, created by Dylan Herrada
All About Raspberry Pi HQ Camera Lenses, created by Dylan Herrada. Last updated on 2020-10-19 07:56:39 PM EDT.

Overview: In this guide, I'll explain the three main lens options for a Raspberry Pi HQ Camera. I have a few years of experience as a video engineer, and a decent amount of experience using cameras with relatively small sensors (mainly mirrorless cinema cameras like the BMPCC), so I am well aware of many of the advantages and challenges involved. That said, I am by no means an expert, so apologies in advance if I get anything wrong.

Parts discussed:
• Raspberry Pi High Quality HQ Camera, $50.00
• 16mm 10MP Telephoto Lens for Raspberry Pi HQ Camera
• 6mm 3MP Wide Angle Lens for Raspberry Pi HQ Camera
• Raspberry Pi 3 Model B+, 1.4GHz Cortex-A53 with 1GB RAM, $35.00
• Raspberry Pi Zero WH (Zero W with Headers), $14.00

© Adafruit Industries https://learn.adafruit.com/raspberry-pi-hq-camera-lenses Page 3 of 13

Crop Factor: What is crop factor? According to Wikipedia (https://adafru.it/MF0): "In digital photography, the crop factor, format factor, or focal length multiplier of an image sensor format is the ratio of the dimensions of a camera's imaging area compared to a reference format; most often, this term is applied to digital cameras, relative to 35 mm film format as a reference."
-
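The crop-factor definition above reduces to a ratio of sensor diagonals, which also gives the "full-frame equivalent" focal length of a lens. A minimal sketch; the 6.287 x 4.712 mm active-area figure is the published spec of the HQ Camera's Sony IMX477 sensor, taken here as an assumption:

```python
import math

def crop_factor(sensor_w_mm, sensor_h_mm, ref_w_mm=36.0, ref_h_mm=24.0):
    """Crop factor = reference-format diagonal / sensor diagonal.

    The default reference is the 35 mm film frame (36 x 24 mm)."""
    sensor_diag = math.hypot(sensor_w_mm, sensor_h_mm)
    ref_diag = math.hypot(ref_w_mm, ref_h_mm)
    return ref_diag / sensor_diag

# Sony IMX477 (Raspberry Pi HQ Camera): 6.287 x 4.712 mm active area
cf = crop_factor(6.287, 4.712)
print(f"Crop factor: {cf:.2f}")  # roughly 5.5
print(f"A 16mm lens frames like a {16 * cf:.0f}mm lens on full frame")
```

This is why the 16mm "telephoto" lens sold for the HQ Camera behaves like a long telephoto in 35 mm terms.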
“Digital Single Lens Reflex”
PHOTOGRAPHY GENERIC ELECTIVE SEM-II

DSLR stands for "Digital Single Lens Reflex". In simple language, a DSLR is a digital camera that uses a mirror mechanism either to reflect light from the camera lens to an optical viewfinder (the eyepiece on the back of the camera that you look through to frame the picture) or to let light pass fully onto the image sensor (which captures the image) by moving the mirror out of the way. Although single lens reflex cameras have been available in various shapes and forms since the 19th century with film as the recording medium, the first commercial digital SLR with an image sensor appeared in 1991. Unlike point-and-shoot and phone cameras, DSLR cameras typically use interchangeable lenses.

Take a look at the following image of an SLR cross section (image courtesy of Wikipedia). When you look through a DSLR viewfinder / eyepiece on the back of the camera, what you see has passed through the lens attached to the camera, which means you are looking at exactly what you are going to capture. Light from the scene passes through the lens onto a reflex mirror (#2) that sits at a 45-degree angle inside the camera chamber, which forwards the light vertically to an optical element called a "pentaprism" (#7). The pentaprism then converts the vertical light back to horizontal by redirecting it through two separate mirrors, right into the viewfinder (#8). When you take a picture, the reflex mirror (#2) swings upwards, blocking the vertical pathway and letting the light pass straight through to the sensor.
-
Panoramas: shoot with the camera positioned vertically, as this gives the photo-merging software more wiggle room when merging the images
Panoramas

What is a Panorama? A panoramic photo covers a larger field of view than a "normal" photograph. In general, if the aspect ratio is 2 to 1 or greater, it is classified as a panoramic photo. This sample is about 3 times wider than tall, an aspect ratio of 3 to 1. A panorama is not limited to horizontal shots; vertical images are also an option.

How is a Panorama Made? Panoramic photos are created by taking a series of overlapping photos and merging them together using software.

Why Not Just Crop a Photo?
• Making a panorama by cropping deletes a lot of data from the image.
• That's not a problem if you are just going to view it in a small format or at a low resolution.
• However, if you want to print the image in a large format, the loss of data will limit the size and quality that can be achieved.

Get a Really Wide Angle Lens?
• A wide-angle lens still may not be wide enough to capture the whole scene in a single shot. Sometimes you just can't get back far enough.
• Photos taken with a wide-angle lens can exhibit undesirable lens distortion.
• Lens cost: an autofocus 14mm f/2.8 lens can set you back $1,800 or more.

What Lens to Use?
• A standard lens works very well for taking panoramic photos.
• You get minimal lens distortion, resulting in more realistic panoramic photos.
• Choose a lens, or a focal length on a zoom lens, of between 35mm and 80mm.
-
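The 2:1 classification rule and the overlapping-shot workflow above translate directly into arithmetic. A quick sketch; both helper names and the 30% overlap default are illustrative assumptions (recommended overlap varies by source):

```python
import math

def is_panorama(width_px, height_px, threshold=2.0):
    """A photo counts as panoramic when its aspect ratio is 2:1 or greater
    (either orientation, since vertical panoramas are also an option)."""
    long_side = max(width_px, height_px)
    short_side = min(width_px, height_px)
    return long_side / short_side >= threshold

def frames_needed(scene_fov_deg, frame_fov_deg, overlap=0.3):
    """How many overlapping shots cover a scene, given the field of view of
    one frame and the fractional overlap between neighboring shots."""
    if frame_fov_deg >= scene_fov_deg:
        return 1
    step = frame_fov_deg * (1.0 - overlap)  # new coverage added per shot
    return math.ceil((scene_fov_deg - frame_fov_deg) / step) + 1

print(is_panorama(3000, 1000))      # True: 3:1, like the sample above
print(frames_needed(180, 54, 0.3))  # shots for a 180-degree sweep
```

Shooting vertically, as the title suggests, narrows the per-frame field of view, so a few more shots are needed, but each seam has more shared image area for the merging software to match.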
Depth-Aware Blending of Smoothed Images for Bokeh Effect Generation
Depth-aware Blending of Smoothed Images for Bokeh Effect Generation. Saikat Dutta, Indian Institute of Technology Madras, Chennai, PIN-600036, India.

ABSTRACT: Bokeh effect is used in photography to capture images where the closer objects look sharp and everything else stays out of focus. Bokeh photos are generally captured using Single Lens Reflex cameras using shallow depth-of-field. Most modern smartphones can take bokeh images by leveraging dual rear cameras or good auto-focus hardware. However, for smartphones with a single rear camera and without good auto-focus hardware, we have to rely on software to generate bokeh images. This kind of system is also useful for generating the bokeh effect in already captured images. In this paper, an end-to-end deep learning framework is proposed to generate a high-quality bokeh effect from images. The original image and different versions of smoothed images are blended to generate the bokeh effect with the help of a monocular depth estimation network. The proposed approach is compared against a saliency-detection-based baseline and a number of approaches proposed in the AIM 2019 Challenge on Bokeh Effect Synthesis. Extensive experiments are shown in order to understand different parts of the proposed algorithm. The network is lightweight and can process an HD image in 0.03 seconds. This approach ranked second in the AIM 2019 Bokeh Effect Challenge, Perceptual Track.

1. Introduction: The depth-of-field, or bokeh, effect is often used in photography to generate aesthetic pictures. Bokeh effect generation is an important problem in Computer Vision and has gained attention recently. Most of the existing approaches (Shen et al., 2016; Wadhwa et al., 2018; Xu et al., 2018) work on human portraits.
-
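The core idea in the abstract, blending the original image with progressively smoothed copies according to estimated depth, can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper's network: the Gaussian radii, the linear depth-to-weight mapping, and the function name are all assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_aware_bokeh(image, depth, focal_depth=0.0, sigmas=(1.0, 3.0, 6.0)):
    """Blend progressively smoothed copies of `image` by per-pixel defocus.

    image: float array (H, W, 3) in [0, 1]; depth: float array (H, W) in [0, 1].
    Pixels near `focal_depth` stay sharp; far pixels take the blurrier copies."""
    # Stack: the original plus three increasingly smoothed versions.
    layers = [image] + [gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas]
    # Normalized defocus in [0, 1], mapped onto the layer-index range.
    defocus = np.clip(np.abs(depth - focal_depth), 0.0, 1.0) * (len(layers) - 1)
    lo = np.floor(defocus).astype(int)
    hi = np.minimum(lo + 1, len(layers) - 1)
    frac = (defocus - lo)[..., None]
    stack = np.stack(layers)             # (L, H, W, 3)
    rows, cols = np.indices(depth.shape)
    # Linear interpolation between the two nearest blur levels per pixel.
    return (1 - frac) * stack[lo, rows, cols] + frac * stack[hi, rows, cols]
```

In the paper the depth map comes from a monocular depth-estimation network; here it is simply an input, so any depth source (stereo, sensor, or learned) can be plugged in.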
Chapter 3 (Aberrations)
Chapter 3: Aberrations

3.1 Introduction. In Chap. 2 we discussed the image-forming characteristics of optical systems, but we limited our consideration to an infinitesimal thread-like region about the optical axis called the paraxial region. In this chapter we will consider, in general terms, the behavior of lenses with finite apertures and fields of view. It has been pointed out that well-corrected optical systems behave nearly according to the rules of paraxial imagery given in Chap. 2. This is another way of stating that a lens without aberrations forms an image of the size and in the location given by the equations for the paraxial or first-order region. We shall measure the aberrations by the amount by which rays miss the paraxial image point.

It can be seen that aberrations may be determined by calculating the location of the paraxial image of an object point and then tracing a large number of rays (by the exact trigonometrical ray-tracing equations of Chap. 10) to determine the amounts by which the rays depart from the paraxial image point. Stated this baldly, the mathematical determination of the aberrations of a lens which covered any reasonable field at a real aperture would seem a formidable task, involving an almost infinite amount of labor. However, by classifying the various types of image faults and by understanding the behavior of each type, the work of determining the aberrations of a lens system can be simplified greatly, since only a few rays need be traced to evaluate each aberration; thus the problem assumes more manageable proportions.
-
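The procedure the chapter describes, locating the paraxial image and then measuring how far exactly traced rays miss it, can be demonstrated on a single refracting spherical surface, where the miss distance is the longitudinal spherical aberration. A sketch under assumed illustrative values (R = 50 mm, air into n = 1.5 glass, object at infinity); this is not the book's Chap. 10 formalism, just elementary Snell's-law geometry:

```python
import math

def paraxial_focus(R, n1=1.0, n2=1.5):
    """Paraxial image distance from the vertex for an object at infinity:
    n2/s' = (n2 - n1)/R  ->  s' = n2*R/(n2 - n1)."""
    return n2 * R / (n2 - n1)

def marginal_focus(h, R, n1=1.0, n2=1.5):
    """Exact (trigonometric) axis crossing for a ray parallel to the axis
    at height h, refracted once at a convex spherical surface."""
    phi = math.asin(h / R)                       # angle of incidence
    theta2 = math.asin(n1 * math.sin(phi) / n2)  # Snell's law
    x0 = R - math.sqrt(R * R - h * h)            # sag: where ray meets surface
    return x0 + h / math.tan(phi - theta2)       # refracted ray crosses the axis

R = 50.0  # mm (illustrative)
p = paraxial_focus(R)
for h in (2.0, 10.0, 20.0):
    lsa = p - marginal_focus(h, R)
    print(f"h = {h:5.1f} mm   longitudinal spherical aberration = {lsa:.3f} mm")
```

The printout shows the chapter's point in miniature: rays near the axis land essentially at the paraxial focus, while marginal rays miss it by an amount that grows rapidly with aperture height, and only a few traced rays are needed to characterize the aberration.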
Portraiture, Surveillance, and the Continuity Aesthetic of Blur
Michigan Technological University, Digital Commons @ Michigan Tech, Michigan Tech Publications, 6-22-2021.

Portraiture, Surveillance, and the Continuity Aesthetic of Blur. Stefka Hristova, Michigan Technological University, [email protected]. Follow this and additional works at: https://digitalcommons.mtu.edu/michigantech-p (Arts and Humanities Commons).

Recommended Citation: Hristova, S. (2021). Portraiture, Surveillance, and the Continuity Aesthetic of Blur. Frames Cinema Journal, 18, 59-98. http://doi.org/10.15664/fcj.v18i1.2249. Retrieved from: https://digitalcommons.mtu.edu/michigantech-p/15062

Frames Cinema Journal, ISSN 2053-8812, Issue 18 (Jun 2021), http://www.framescinemajournal.com

Introduction: With the increasing transformation of photography away from a camera-based analogue image-making process into a computerised set of procedures, the ontology of the photographic image has been challenged. Portraits in particular have become reconfigured into what Mark B. Hansen has called "digital facial images" and Mitra Azar has subsequently reworked into "algorithmic facial images." 1 This transition has amplified the role of portraiture as a representational device, as a node in a network
-
Ground-Based Photographic Monitoring
United States Department of Agriculture, Forest Service, Pacific Northwest Research Station. General Technical Report PNW-GTR-503, May 2001: Ground-Based Photographic Monitoring.

Author: Frederick C. Hall is senior plant ecologist, U.S. Department of Agriculture, Forest Service, Pacific Northwest Region, Natural Resources, P.O. Box 3623, Portland, Oregon 97208-3623. Paper prepared in cooperation with the Pacific Northwest Region.

Abstract: Hall, Frederick C. 2001. Ground-based photographic monitoring. Gen. Tech. Rep. PNW-GTR-503. Portland, OR: U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station. 340 p.

Land management professionals (foresters, wildlife biologists, range managers, and land managers such as ranchers and forest land owners) often need to evaluate their management activities. Photographic monitoring is a fast, simple, and effective way to determine whether changes made to an area have been successful. Ground-based photo monitoring means using photographs taken at a specific site to monitor conditions or change. It may be divided into two systems: (1) comparison photos, whereby a photograph is used to compare a known condition with field conditions to estimate some parameter of the field condition; and (2) repeat photographs, whereby several pictures are taken of the same tract of ground over time to detect change. Comparison systems deal with fuel loading, herbage utilization, and public reaction to scenery. Repeat photography is discussed in relation to landscape, remote, and site-specific systems. Critical attributes of repeat photography are (1) maps to find the sampling location and of the photo monitoring layout; (2) documentation of the monitoring system to include purpose, camera and film, weather, season, sampling technique, and equipment; and (3) precise replication of photographs.
-
Depth of Field
Lenses form images of objects a predictable distance away from the lens. The distance from the image to the lens is the image distance. Image distance depends on the object distance (distance from object to the lens) and the focal length of the lens. Figure 1 shows how the image distance depends on object distance for lenses with focal lengths of 35 mm and 200 mm.

Figure 1: Dependence of image distance upon object distance.

Cameras use lenses to focus the images of objects upon the film or exposure medium. Objects within a photographic scene are usually a varying distance from the lens. Because a lens is capable of precisely focusing objects at a single distance, some objects will be precisely focused while others will be out of focus and even blurred. Skilled photographers strive to maximize the depth of field within their photographs. Depth of field refers to the distance between the nearest and the farthest objects within a photographic scene that are acceptably focused. Figure 2 is an example of a photograph with a shallow depth of field.

One variable that affects depth of field is the f-number. The f-number is the ratio of the focal length to the diameter of the aperture. The aperture is the circular opening through which light travels before reaching the lens. Table 1 shows the dependence of the depth of field (DOF) upon the f-number of a digital camera.

Table 1: Dependence of depth of field upon f-number and camera lens

             35-mm camera lens            200-mm camera lens
f-number   DN (m)   DF (m)     DOF (m)   DN (m)   DF (m)   DOF (m)
2.8        4.11     6.39       2.29      4.97     5.03     0.06
4.0        3.82     7.23       3.39      4.95     5.05     0.10
5.6        3.48     8.86       5.38      4.94     5.07     0.13
8.0        3.09     13.02      9.93      4.91     5.09     0.18
22.0       1.82     Infinity   Infinite  4.775    5.27     0.52

The DN value represents the nearest object distance that is acceptably focused.
-
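Near and far limits like those in Table 1 come from the standard hyperfocal-distance formulas. A sketch; the circle-of-confusion value is an assumption and the table's exact figures depend on the value its authors used, so don't expect a digit-for-digit match:

```python
def dof_limits(f_mm, N, s_mm, coc_mm=0.03):
    """Near/far acceptably focused distances for focal length f_mm,
    f-number N, subject distance s_mm, circle of confusion coc_mm (all mm)."""
    H = f_mm * f_mm / (N * coc_mm) + f_mm  # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    # Beyond the hyperfocal distance, the far limit runs to infinity.
    far = float("inf") if s_mm >= H else s_mm * (H - f_mm) / (H - s_mm)
    return near, far

# Subject at 5 m, as in the table
for N in (2.8, 4.0, 5.6, 8.0, 22.0):
    dn, df = dof_limits(35.0, N, 5000.0)
    print(f"f/{N:>4}: near {dn / 1000:.2f} m, far {df / 1000:.2f} m")
```

The formulas reproduce the table's qualitative behavior: stopping down (larger N) widens the zone of acceptable focus, the far limit of the 35 mm lens reaches infinity at f/22, and the 200 mm lens keeps a very thin depth of field at every aperture.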
What's a Megapixel Lens and Why Would You Need One?
Theia Technologies white paper. (Also translated in Russian: «Что такое мегапиксельный объектив и для чего он нужен».)

What's a Megapixel Lens and Why Would You Need One? It's an exciting time in the security department. You've finally received approval to migrate from your installed analog cameras to new megapixel models, and expectations are high. As you get together with your integrator, you start selecting the cameras you plan to install, looking forward to getting higher quality, higher resolution images and, in many cases, covering the same amount of ground with one camera that would otherwise have taken several analog models. You're covering the basics of those megapixel cameras, including the housing and mounting hardware. What about the lens? You've just opened up Pandora's box.

A Lens Is Not a Lens Is Not a Lens. The lens needed for an IP/megapixel camera is much different from the lens needed for a traditional analog camera. These higher resolution cameras demand higher quality lenses. For instance, in a megapixel camera, the focal-plane spot size of the lens must be comparable to or smaller than the pixel size on the sensor (Figures 1 and 2). To do this, more elements, and higher precision elements, are required for megapixel camera lenses, which can make them more costly than their analog counterparts.

Figure 1: Spot size of a megapixel lens is much smaller, required for good focus on megapixel sensors. Figure 2: Spot size of a standard lens won't allow sharp focus on a megapixel sensor.
-
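The matching rule above, lens spot size comparable to or smaller than the pixel, is easy to check numerically once you know the pixel pitch. A sketch; the sensor dimensions, resolution, and spot sizes below are illustrative assumptions, not Theia specifications:

```python
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pixel pitch for a sensor of given active area and
    resolution, assuming square pixels filling the area."""
    pixels = megapixels * 1e6
    area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)
    return math.sqrt(area_um2 / pixels)

def lens_resolves(spot_um, pitch_um):
    """Focal-plane spot must be comparable to or smaller than one pixel."""
    return spot_um <= pitch_um

# Hypothetical 1/2.7"-class sensor (~5.37 x 4.04 mm) at 5 MP
pitch = pixel_pitch_um(5.37, 4.04, 5.0)
print(f"pixel pitch ~ {pitch:.2f} um")
print("10 um spot OK?", lens_resolves(10.0, pitch))  # analog-era lens: no
print(" 2 um spot OK?", lens_resolves(2.0, pitch))   # megapixel lens: yes
```

The arithmetic makes the white paper's point concrete: a sensor in the 2 µm pixel-pitch range simply cannot be served by a lens whose focused spot is several times that size, no matter how many pixels the sensor has.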
EVERYDAY MAGIC Bokeh
"Our goal should be to perceive the extraordinary in the ordinary, and when we get good enough, to live vice versa, in the ordinary extraordinary." ~ Eric Booth

Welcome to Lesson Two of Everyday Magic. In this week's lesson we are going to dig deep into those magical little orbs of light in a photograph known as bokeh. Pronounced BOH-Kə (or BOH-kay), the word "bokeh" is an English translation of the Japanese word boke, which means "blur" or "haze".

What is Bokeh? Photographically speaking, bokeh is defined as the aesthetic quality of the blur produced by the camera lens in the out-of-focus parts of an image. And it is the camera lens, and how it renders the out-of-focus light in the background, that gives bokeh its more or less circular appearance. But what makes this unique visual experience seem magical is the fact that we are not able to 'see' bokeh with our superior human vision and excellent depth of field. Bokeh is totally a function of 'seeing' through the lens.

Playing with Focal Distance. In addition to a shallow depth of field, the bokeh in an image is also determined by 1) the distance between the subject and the background and 2) the distance between the lens and the subject. Depending on how you compose your image, the bokeh can be smooth and 'creamy' in appearance or it can be livelier and more energetic.

Bokeh and Depth of Field. The key to achieving beautiful bokeh in your images is shooting with a shallow depth of field (DOF), which is the portion of an image that appears acceptably sharp.
-
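The two distances called out in the lesson above both appear in the standard thin-lens blur-circle formula, which makes the effect quantitative. A sketch; the function name is mine and the formula is the common thin-lens approximation, not anything from the lesson:

```python
def blur_disc_mm(f_mm, N, subject_mm, background_mm):
    """Diameter of the blur circle on the sensor for a point light at
    background_mm when a lens of focal length f_mm and f-number N is
    focused at subject_mm (thin-lens approximation)."""
    aperture = f_mm / N  # entrance-pupil diameter
    return aperture * f_mm * (background_mm - subject_mm) / (
        background_mm * (subject_mm - f_mm))

# 50 mm f/1.8 lens, subject at 1 m, background lights at 3 m (illustrative)
b = blur_disc_mm(50.0, 1.8, 1000.0, 3000.0)
print(f"blur disc ~ {b:.2f} mm on the sensor")
```

Playing with the inputs confirms the lesson's advice: pushing the background farther behind the subject, moving the lens closer to the subject, or opening the aperture (smaller N) all grow those background orbs.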
Using Depth Mapping to Realize Bokeh Effect with a Single Camera Android Device EE368 Project Report Authors (SCPD Students): Jie Gong, Ran Liu, Pradeep Vukkadala
Using Depth Mapping to Realize Bokeh Effect with a Single Camera Android Device. EE368 Project Report. Authors (SCPD students): Jie Gong, Ran Liu, Pradeep Vukkadala.

Abstract: In this paper we seek to produce a bokeh effect with a single image taken from an Android device by post-processing. Depth mapping is the core of bokeh effect production. A depth map is an estimate of depth at each pixel in the photo which can be used to identify portions of the image that are far away and belong to the background, and therefore to apply a digital blur to the background. We present algorithms to determine the defocus map from a single input image. We obtain a sparse defocus map by calculating the ratio of gradients from the original image and a reblurred image. Then, the full defocus map is obtained by propagating values from edges to the entire image using a nearest-neighbor method and the matting Laplacian.

Bokeh effect is usually achieved in high-end SLR cameras using portrait lenses that are relatively large in size and have a shallow depth of field. It is extremely difficult to achieve the same effect (physically) in smartphones, which have miniaturized camera lenses and sensors. However, the latest iPhone 7 has a portrait mode which can produce the bokeh effect thanks to its dual-camera configuration. To compete with the iPhone 7, Google recently also announced that the latest Google Pixel phone can take photos with the bokeh effect, which would be achieved by taking two photos at different depths to camera and combining them via software. There is a gap that neither of the two biggest players can achieve the bokeh effect only using a
-
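The sparse defocus estimate described in the abstract, the ratio of gradient magnitudes between the original image and a re-blurred copy, can be sketched with NumPy and SciPy. This is a simplification of the report's pipeline: the sigma value and edge threshold are assumptions, and the edge-to-dense propagation step (nearest neighbor plus matting Laplacian) is omitted:

```python
import numpy as np
from scipy import ndimage

def sparse_defocus_map(gray, sigma1=1.0, edge_thresh=0.01):
    """Estimate per-edge-pixel defocus sigma from the gradient-magnitude
    ratio between `gray` and a copy re-blurred with known sigma1.

    At an edge already blurred by sigma0, re-blurring gives total blur
    sqrt(sigma0^2 + sigma1^2), so the gradient ratio R yields
    sigma0 = sigma1 / sqrt(R^2 - 1)."""
    reblurred = ndimage.gaussian_filter(gray, sigma1)
    g0 = np.hypot(*np.gradient(gray))       # gradient magnitude, original
    g1 = np.hypot(*np.gradient(reblurred))  # gradient magnitude, re-blurred
    edges = g0 > edge_thresh                # only edge pixels are reliable
    ratio = np.where(edges, g0 / np.maximum(g1, 1e-8), 1.0)
    ratio = np.clip(ratio, 1.0 + 1e-6, None)  # ratio <= 1 is noise; clamp
    sigma0 = sigma1 / np.sqrt(ratio**2 - 1.0)
    out = np.full_like(gray, np.nan)
    out[edges] = sigma0[edges]              # sparse: defined only on edges
    return out
```

Blurrier regions produce gradient ratios closer to 1 and hence larger estimated sigma, which is exactly the per-pixel defocus signal the report then densifies and uses to drive the digital background blur.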
Carl Zeiss Oberkochen Large Format Lenses 1950-1972
Large format lenses from Carl Zeiss Oberkochen 1950-1972. © 2013-2019 Arne Cröll. All Rights Reserved (this version is from October 4, 2019).

Carl Zeiss Jena and Carl Zeiss Oberkochen. Before and during WWII, the Carl Zeiss company in Jena was one of the largest optics manufacturers in Germany. They produced a variety of lenses suitable for large format (LF) photography, including the well-known Tessars and Protars in several series, but also process lenses and aerial lenses. The Zeiss-Ikon sister company in Dresden manufactured a range of large format cameras, such as the Zeiss “Ideal”, “Maximar”, “Tropen-Adoro”, and “Juwel” (Jewel); the latter camera, in the 3¼” x 4¼” size, was used by Ansel Adams for some time.

At the end of World War II, the German state of Thuringia, where Jena is located, was under the control of British and American troops. However, the Yalta Conference agreement placed it under Soviet control shortly thereafter. Just before the US command handed the administration of Thuringia over to the Soviet Army, American troops moved a considerable part of the leading management and research staff of Carl Zeiss Jena and the sister company Schott glass to Heidenheim near Stuttgart, 126 people in all [1]. They immediately started to look for a suitable place for a new factory and found it in the small town of Oberkochen, just 20 km from Heidenheim. This led to the foundation of the company “Opton Optische Werke” in Oberkochen, West Germany, on Oct. 30, 1946, initially as a full subsidiary of the original factory in Jena.