Focal Length Angle of View F-Number Field of View

Technical Information

CCTV is widely accepted as an important and valuable tool in many areas. Manual iris lenses are used for constant, stable lighting conditions and auto iris lenses for variable lighting conditions. Select the lens best suited for optimum performance in each application.

Focal Length

Rays from infinitely distant objects are condensed internally in the lens at a common point on the optical axis. The point at which the image sensor of the CCTV camera is positioned is called the focal point. By virtue of design, lenses have two principal points, a primary principal point and a secondary principal point; the distance between the secondary principal point and the focal point (image sensor) determines the focal length of the lens. [Figure: primary and secondary principal points, focal length, and focal point / image sensor.]

Angle of View

The angle formed by the two lines from the secondary principal point to the edges of the image sensor is called the angle of view. The focal length of the lens is fixed regardless of the image format size of the CCTV camera; conversely, the angle of view varies in accordance with the image size. The focal lengths in the catalog are nominal, and the angles of view calculated from them by the following formula are approximate:

θ = 2 × tan⁻¹(D / (2f))

where D is the image (format) size and f is the focal length.

F-Number

The F number is the index for the amount of light that passes through a lens; the smaller the number, the greater the amount of light. The F number is the ratio between focal length and effective aperture:

F number = f / D

where f = focal length and D = effective aperture diameter.

Field of View

The field of view varies along with the focal length of the lens as follows:

w/W = h/H = f/L

W: width of object
H: height of object
w: width of format (1/2 format = 6.4mm, 1/3 format = 4.8mm, 1/4 format = 3.6mm)
h: height of format (1/2 format = 4.8mm, 1/3 format = 3.6mm, 1/4 format = 2.7mm)
f: focal length
L: object distance

Example: full image of a 4.5m-high object on a TV monitor; camera: 1/3 format, object distance: 10m.
H = 4.5m = 4,500mm, L = 10m = 10,000mm, h = 3.6mm
f = h × L / H = 3.6 × 10,000 / 4,500 = 8mm

Close-Up Application

1) Extension Tube (Macro Ring)
When the rays originate from a finite object distance, they are condensed at a point further than the focal point, while rays from an infinite distance are condensed at the actual focal point. The focus adjustment moves the lens barrel toward the object to shift the focusing point onto the image sensor; however, the amount of focusing adjustment is mechanically limited, as reflected in the minimum object distance. An extension tube (macro ring) is inserted between the lens and the camera to shift the focus point further than the mechanical limit for close-up focus.

2) Close-Up Lens
The close-up lens is a positive meniscus lens used as a supplementary lens. Generally, three types of close-up lenses are available: close-up lenses No. 1, 2 and 3 have focal lengths of 1,000mm (1,000mm/1), 500mm (1,000mm/2) and 333mm (1,000mm/3) respectively. When an object is placed at the focal point of the close-up lens, the rays from the object are converted by the close-up lens into rays parallel to the optical axis. This lens is effective when you wish to come closer to an object than the minimum object distance of the lens, or when taking close-up pictures of small objects.
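Returning to the basic relations above (angle of view, F number and field of view), they are easy to check numerically. The following is a minimal Python sketch; the function names and the horizontal-angle and aperture checks are my own additions, but the formulas and the worked example (1/3-inch format, 4.5m-high object at 10m) come from the text above.

```python
import math

def angle_of_view(format_size_mm, focal_length_mm):
    """Angle of view in degrees: theta = 2 * arctan(D / (2 * f))."""
    return math.degrees(2 * math.atan(format_size_mm / (2 * focal_length_mm)))

def f_number(focal_length_mm, aperture_diameter_mm):
    """F number = focal length / effective aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

def focal_length_for_field(format_size_mm, object_size_mm, object_distance_mm):
    """From w/W = h/H = f/L: f = format size * object distance / object size."""
    return format_size_mm * object_distance_mm / object_size_mm

# Worked example from the text: 1/3" format (h = 3.6 mm), 4.5 m high object at 10 m.
f = focal_length_for_field(3.6, 4_500, 10_000)
print(f)                                 # 8.0 (mm), matching the catalog example
print(round(angle_of_view(4.8, f), 1))   # horizontal angle for the 1/3" width, w = 4.8 mm
print(round(f_number(f, 5.7), 1))        # an 8 mm lens with a 5.7 mm aperture is about F/1.4
```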
Depth of Field

When an object is in focus, it is typically observed that the area in front of and behind the object is also in focus. The range that is in focus is called the "depth of field". When the background extends to infinity, the object distance (focusing distance) is called the "hyperfocal distance". Depth of field is calculated using the following formulas:

H = f² / (C × F)
T1 = B(H + f) / (H + B)
T2 = B(H − f) / (H − B)

F: F number
H: hyperfocal distance
f: focal length
B: object distance (measured from the image sensor)
T1: near limit
T2: far limit
C: circle of least confusion (1/2 format = 0.015mm, 1/3 format = 0.011mm, 1/4 format = 0.008mm)

Depth of field increases when:
*Focal length is shorter
*F-number is larger (F/1.4 < F/5.6)
*Object distance is longer

CS and C Mount

The CS mount, the present CCTV market standard, is a mount specially designed for CCTV camera lenses and developed by PENTAX. It miniaturizes the lens and improves its performance by shortening the flange back by 5mm compared with the C mount. Both C and CS mount lenses are available in the present market. A CS mount lens is applicable only to a CS mount camera; a C mount lens is applicable not only to a C mount camera but also to a CS mount camera by using a 5mm adapter ring (such as the C-CS-A). [Figure: flange back, measured from the mount surface to the focusing plane, is 12.5mm for the CS mount and 17.5mm for the C mount, i.e. 12.5mm plus a 5mm spacer.]

Camera Format

The size of the camera's imaging device also affects the angle of view, with smaller devices creating narrower angles of view when used with the same lens. The format of the lens, however, is irrelevant to the angle of view; it merely needs to project an image which will cover the device, i.e. the same format as the camera or larger. This also means that 1/3" cameras can utilize the entire range of lenses from 1/3" to 2/3", with a 1/3" 8mm lens giving the same angle as a 2/3" 8mm lens. The latter combination also provides increased resolution and picture quality, as only the centre of the lens is being utilized, where the optics can be ground more accurately.

Angle of View

It is important to know the angle of view the lens needs in order to take in the object. The angle of view changes with the focal length of the lens and the image size of the camera. The focal length needed to cover the object can be calculated from the following formulas:

(1) in the case of the vertical size: f = v × D / V
(2) in the case of the horizontal size: f = h × D / H

f: focal length of lens
V: vertical size of object
H: horizontal size of object
D: distance from lens to object
v: vertical size of image (see the following table)
h: horizontal size of image (see the following table)

FORMAT   2/3 inch   1/2 inch   1/3 inch   1/4 inch
v        6.6mm      4.8mm      3.6mm      2.7mm
h        8.8mm      6.4mm      4.8mm      3.6mm

For example, with a 1/2 inch camera (v = 4.8mm), a vertical object size V = 330mm (33cm) and a distance from lens to object D = 2500mm (250cm), substitute these data into formula (1). For the horizontal case, with a 1/2 inch camera (h = 6.4mm), a horizontal object size H = 440mm (44cm) and D = 2500mm (250cm), substitute these data into formula (2).
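The example stops at "substitute these data into the formula", so as an illustration the arithmetic is carried out in the short Python sketch below. The function name is my own, and the roughly 36mm results are my own calculation from the example values rather than figures quoted in the text.

```python
def required_focal_length(image_size_mm, object_size_mm, distance_mm):
    """Formulas (1) and (2): f = v * D / V (vertical) or f = h * D / H (horizontal)."""
    return image_size_mm * distance_mm / object_size_mm

# 1/2 inch camera, object 2,500 mm (250 cm) from the lens.
f_vertical = required_focal_length(4.8, 330, 2500)    # v = 4.8 mm, V = 330 mm
f_horizontal = required_focal_length(6.4, 440, 2500)  # h = 6.4 mm, H = 440 mm
print(round(f_vertical, 1), round(f_horizontal, 1))   # about 36.4 mm in both cases
```

When the vertical and horizontal results differ, the shorter of the two focal lengths is the one that keeps the whole object in the frame.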
Focal Length

The focal length of the lens is measured in mm and directly relates to the angle of view that will be achieved. A short focal length provides a wide angle of view, while a long focal length becomes telephoto, with a narrow angle of view. A "normal" angle of view is similar to what we see with our own eye and corresponds to a focal length roughly equal to the size of the pick-up device. The "computar" range calculator is a simple device for estimating focal length, object dimension and angle of view; alternatively, the VM300 view finder gives an optical way of finding the focal length.

[Comparison of monitoring images: sample pictures taken with a 1/3" camera at object distances of 2m, 5m, 10m and 20m, for focal lengths of 2.8mm, 3.5mm, 8mm, 30mm and 50mm.]

F Stop

The lens usually has two measurements of F stop or aperture: the maximum aperture (minimum F stop) when the lens is fully open, and the minimum aperture (maximum F stop) just before the lens completely closes. The F stop has a number of effects on the final image. A low minimum F stop means the lens can pass more light in dark conditions, allowing the camera to produce a better image. A high maximum F stop may be necessary where there is a very high level of light or reflection; this will prevent the camera "whiting out" and maintain a constant video level. All auto iris lenses are supplied with neutral density filters to increase the maximum F stop. The F stop also directly affects the depth of field.
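That last point can be made concrete with the formulas from the Depth of Field section above. The sketch below is illustrative only: the 50mm lens, the 1/3-inch circle of least confusion (0.011mm) and the 5m object distance are assumptions of my own, not an example taken from the catalog.

```python
def hyperfocal(f_mm, c_mm, f_number):
    """Hyperfocal distance: H = f^2 / (C * F)."""
    return f_mm ** 2 / (c_mm * f_number)

def depth_of_field_limits(f_mm, c_mm, f_number, b_mm):
    """Near and far limits: T1 = B(H + f) / (H + B), T2 = B(H - f) / (H - B)."""
    h = hyperfocal(f_mm, c_mm, f_number)
    near = b_mm * (h + f_mm) / (h + b_mm)
    # Once the object distance reaches the hyperfocal distance, the far limit is infinity.
    far = b_mm * (h - f_mm) / (h - b_mm) if b_mm < h else float("inf")
    return near, far

# Assumed example: 50 mm lens on a 1/3" camera (C = 0.011 mm), object 5 m away,
# compared wide open at F/1.4 and stopped down to F/5.6.
for f_no in (1.4, 5.6):
    near, far = depth_of_field_limits(50.0, 0.011, f_no, 5_000)
    print(f"F/{f_no}: sharp from about {near / 1000:.1f} m to {far / 1000:.1f} m")
```

In this example, stopping down from F/1.4 to F/5.6 roughly quadruples the range that stays in focus, consistent with the earlier rule that a larger F-number gives greater depth of field.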
Recommended publications
  • “Digital Single Lens Reflex”
    PHOTOGRAPHY GENERIC ELECTIVE SEM-II DSLR stands for “Digital Single Lens Reflex”. In simple language, a DSLR is a digital camera that uses a mirror mechanism to either reflect light from a camera lens to an optical viewfinder (which is an eyepiece on the back of the camera that one looks through to see what they are taking a picture of) or let light fully pass onto the image sensor (which captures the image) by moving the mirror out of the way. Although single lens reflex cameras have been available in various shapes and forms since the 19th century with film as the recording medium, the first commercial digital SLR with an image sensor appeared in 1991. Compared to point-and-shoot and phone cameras, DSLR cameras typically use interchangeable lenses. Take a look at the following image of an SLR cross section (image courtesy of Wikipedia): When you look through a DSLR viewfinder / eyepiece on the back of the camera, whatever you see is passed through the lens attached to the camera, which means that you could be looking at exactly what you are going to capture. Light from the scene you are attempting to capture passes through the lens into a reflex mirror (#2) that sits at a 45 degree angle inside the camera chamber, which then forwards the light vertically to an optical element called a “pentaprism” (#7). The pentaprism then converts the vertical light to horizontal by redirecting the light through two separate mirrors, right into the viewfinder (#8). When you take a picture, the reflex mirror (#2) swings upwards, blocking the vertical pathway and letting the light directly through.
  • Completing a Photography Exhibit Data Tag
    Completing a Photography Exhibit Data Tag Current Data Tags are available at: https://unl.box.com/s/1ttnemphrd4szykl5t9xm1ofiezi86js Camera Make & Model: Indicate the brand and model of the camera, such as Google Pixel 2, Nikon Coolpix B500, or Canon EOS Rebel T7. Focus Type: • Fixed Focus means the photographer is not able to adjust the focal point. These cameras tend to have a large depth of field. This might include basic disposable cameras. • Auto Focus means the camera automatically adjusts the optics in the lens to bring the subject into focus. The camera typically selects what to focus on. However, the photographer may also be able to select the focal point using a touch screen for example, but the camera will automatically adjust the lens. This might include digital cameras and mobile device cameras, such as phones and tablets. • Manual Focus allows the photographer to manually adjust and control the lens’ focus by hand, usually by turning the focus ring. Camera Type: Indicate whether the camera is digital or film. (The following questions are for Unit 2 and 3 exhibitors only.) Did you manually adjust the aperture, shutter speed, or ISO? Indicate whether you adjusted these settings to capture the photo. Note: Regardless of whether or not you adjusted these settings manually, you must still identify the image’s specific F Stop, Shutter Speed, ISO, and Focal Length settings. “Auto” is not an acceptable answer. Digital cameras automatically record this information for each photo captured. This information, referred to as metadata, is attached to the image file and goes with it when the image is downloaded to a computer, for example.
  • Panoramas Shoot with the Camera Positioned Vertically As This Will Give the Photo Merging Software More Wriggle-Room in Merging the Images
    Panoramas. What is a Panorama? A panoramic photo covers a larger field of view than a “normal” photograph. In general, if the aspect ratio is 2 to 1 or greater, it’s classified as a panoramic photo. This sample is about 3 times wider than tall, an aspect ratio of 3 to 1. What is a Panorama? A panorama is not limited to horizontal shots only. Vertical images are also an option. How is a Panorama Made? Panoramic photos are created by taking a series of overlapping photos and merging them together using software. Why Not Just Crop a Photo? • Making a panorama by cropping deletes a lot of data from the image. • That’s not a problem if you are just going to view it in a small format or at a low resolution. • However, if you want to print the image in a large format the loss of data will limit the size and quality that can be made. Get a Really Wide Angle Lens? • A wide-angle lens still may not be wide enough to capture the whole scene in a single shot. Sometimes you just can’t get back far enough. • Photos taken with a wide-angle lens can exhibit undesirable lens distortion. • Lens cost: an auto focus 14mm f/2.8 lens can set you back $1,800 plus. What Lens to Use? • A standard lens works very well for taking panoramic photos. • You get minimal lens distortion, resulting in more realistic panoramic photos. • Choose a lens, or a focal length on a zoom lens, of between 35mm and 80mm.
  • Depth-Aware Blending of Smoothed Images for Bokeh Effect Generation
    Depth-aware Blending of Smoothed Images for Bokeh Effect Generation. Saikat Dutta, Indian Institute of Technology Madras, Chennai, PIN-600036, India. ABSTRACT: Bokeh effect is used in photography to capture images where the closer objects look sharp and everything else stays out-of-focus. Bokeh photos are generally captured using Single Lens Reflex cameras using shallow depth-of-field. Most modern smartphones can take bokeh images by leveraging dual rear cameras or good auto-focus hardware. However, for smartphones with a single rear camera and without good auto-focus hardware, we have to rely on software to generate bokeh images. This kind of system is also useful to generate the bokeh effect in already captured images. In this paper, an end-to-end deep learning framework is proposed to generate a high-quality bokeh effect from images. The original image and different versions of smoothed images are blended to generate the bokeh effect with the help of a monocular depth estimation network. The proposed approach is compared against a saliency-detection-based baseline and a number of approaches proposed in the AIM 2019 Challenge on Bokeh Effect Synthesis. Extensive experiments are shown in order to understand different parts of the proposed algorithm. The network is lightweight and can process an HD image in 0.03 seconds. This approach ranked second in the AIM 2019 Bokeh Effect Challenge, Perceptual Track. 1. Introduction: Depth-of-field effect or Bokeh effect is often used in photography to generate aesthetic pictures. […] an important problem in Computer Vision and has gained attention recently. Most of the existing approaches (Shen et al., 2016; Wadhwa et al., 2018; Xu et al., 2018) work on human portraits.
  • Estimation and Correction of the Distortion in Forensic Image Due to Rotation of the Photo Camera
    Master Thesis Electrical Engineering February 2018 Master Thesis Electrical Engineering with emphasis on Signal Processing February 2018 Estimation and Correction of the Distortion in Forensic Image due to Rotation of the Photo Camera Sathwika Bavikadi Venkata Bharath Botta Department of Applied Signal Processing Blekinge Institute of Technology SE–371 79 Karlskrona, Sweden This thesis is submitted to the Department of Applied Signal Processing at Blekinge Institute of Technology in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering with Emphasis on Signal Processing. Contact Information: Author(s): Sathwika Bavikadi E-mail: [email protected] Venkata Bharath Botta E-mail: [email protected] Supervisor: Irina Gertsovich University Examiner: Dr. Sven Johansson Department of Applied Signal Processing Internet: www.bth.se Blekinge Institute of Technology Phone: +46 455 38 50 00 SE–371 79 Karlskrona, Sweden Fax: +46 455 38 50 57 Abstract Images, unlike text, represent an effective and natural communication medium for humans, due to their immediacy and the easy way to understand the image content. Shape recognition and pattern recognition are among the most important tasks in image processing. Crime scene photographs should always be in focus and there should always be a ruler present; this allows the investigators to resize the image to accurately reconstruct the scene. Therefore, the camera must be on a grounded platform such as a tripod. Due to the rotation of the camera around the camera center there exists distortion in the image, which must be minimized.
  • Session Outline: History of the Daguerreotype
    Fundamentals of the Conservation of Photographs SESSION: History of the Daguerreotype INSTRUCTOR: Grant B. Romer SESSION OUTLINE ABSTRACT The daguerreotype process evolved out of the collaboration of Louis Jacques Mande Daguerre (1787-1851) and Nicephore Niepce, which began in 1827. During their experiments to invent a commercially viable system of photography, a number of photographic processes evolved which contributed elements that led to the daguerreotype. Following Niepce’s death in 1833, Daguerre continued experimentation and discovered in 1835 the basic principle of the process. Later, investigation of the process by prominent scientists led to important understandings and improvements. By 1843 the process had reached technical perfection and remained the commercially dominant system of photography in the world until the mid-1850s. The image quality of the fine daguerreotype set the photographic standard, and the photographic industry was established around it. The standardized daguerreotype process after 1843 entailed seven essential steps: plate polishing, sensitization, camera exposure, development, fixation, gilding, and drying. The daguerreotype process is explored more fully in the Technical Note: Daguerreotype. The daguerreotype image is seen as a positive to full effect through a combination of reflection off the plate surface and the scattering of light by the imaging particles. Housings exist in a great variety of styles, usually following the fashion of miniature portrait presentation. The daguerreotype plate is extremely vulnerable to mechanical damage and the deteriorating influences of atmospheric pollutants. Hence, highly colored and obscuring corrosion films are commonly found on daguerreotypes. Many daguerreotypes have been damaged or destroyed by uninformed attempts to wipe these films away.
  • Photography and Photomontage in Landscape and Visual Impact Assessment
    Photography and Photomontage in Landscape and Visual Impact Assessment Landscape Institute Technical Guidance Note Public Consultation Draft 2018-06-01 To the recipient of this draft guidance The Landscape Institute is keen to hear the views of LI members and non-members alike. We are happy to receive your comments in any form (eg annotated PDF, email with paragraph references) via email to [email protected] which will be forwarded to the Chair of the working group. Alternatively, members may make comments on Talking Landscape: Topic “Photography and Photomontage Update”. You may provide any comments you consider would be useful, but may wish to use the following as a guide. 1) Do you expect to be able to use this guidance? If not, why not? 2) Please identify anything you consider to be unclear, or needing further explanation or justification. 3) Please identify anything you disagree with and state why. 4) Could the information be better-organised? If so, how? 5) Are there any important points that should be added? 6) Is there anything in the guidance which is not required? 7) Is there any unnecessary duplication? 8) Any other suggestions? Responses to be returned by 29 June 2018. Incidentally, the ##’s are to aid a final check of cross-references before publication. Contents: 1 Introduction; 2 Background Methodology; 3 Photography – equipment and approaches needed to capture suitable images. Appendices: App 01 Site equipment; App 02 Camera settings; App 03 Dealing with panoramas; App 04 Technical methodology template.
  • Chapter 3 (Aberrations)
    Chapter 3 Aberrations 3.1 Introduction In Chap. 2 we discussed the image-forming characteristics of optical systems, but we limited our consideration to an infinitesimal thread-like region about the optical axis called the paraxial region. In this chapter we will consider, in general terms, the behavior of lenses with finite apertures and fields of view. It has been pointed out that well-corrected optical systems behave nearly according to the rules of paraxial imagery given in Chap. 2. This is another way of stating that a lens without aberrations forms an image of the size and in the location given by the equations for the paraxial or first-order region. We shall measure the aberrations by the amount by which rays miss the paraxial image point. It can be seen that aberrations may be determined by calculating the location of the paraxial image of an object point and then tracing a large number of rays (by the exact trigonometrical ray-tracing equations of Chap. 10) to determine the amounts by which the rays depart from the paraxial image point. Stated this baldly, the mathematical determination of the aberrations of a lens which covered any reasonable field at a real aperture would seem a formidable task, involving an almost infinite amount of labor. However, by classifying the various types of image faults and by understanding the behavior of each type, the work of determining the aberrations of a lens system can be simplified greatly, since only a few rays need be traced to evaluate each aberration; thus the problem assumes more manageable proportions.
  • Determination of Focal Length of a Converging Lens and Mirror
    Physics 41- Lab 5 Determination of Focal Length of A Converging Lens and Mirror Objective: Apply the thin-lens equation and the mirror equation to determine the focal length of a converging (biconvex) lens and mirror. Apparatus: Biconvex glass lens, spherical concave mirror, meter ruler, optical bench, lens holder, self-illuminated object (generally a vertical arrow), screen. Background In class you have studied the physics of thin lenses and spherical mirrors. In today's lab, we will analyze several physical configurations using both biconvex lenses and concave mirrors. The components of the experiment, that is, the optics device (lens or mirror), object and image screen, will be placed on a meter stick and may be repositioned easily. The meter stick is used to determine the position of each component. For our object, we will make use of a light source with some distinguishing marking, such as an arrow or visible filament. Light from the object passes through the lens and the resulting image is focused onto a white screen. One characteristic feature of all thin lenses and concave mirrors is the focal length, f, which is defined as the image distance of an object that is positioned infinitely far away. The focal lengths of a biconvex lens and a concave mirror are shown in Figures 1 and 2, respectively. Notice the incoming light rays from the object are parallel, indicating the object is very far away. The point, C, in Figure 2 marks the center of curvature of the mirror. The distance from C to any point on the mirror is known as the radius of curvature, R.
  • Depth of Field PDF Only
    Depth of Field for Digital Images Robin D. Myers Better Light, Inc. In the days before digital images, before the advent of roll film, photography was accomplished with photosensitive emulsions spread on glass plates. After processing and drying the glass negative, it was contact printed onto photosensitive paper to produce the final print. The size of the final print was the same size as the negative. During this period some of the foundational work into the science of photography was performed. One of the concepts developed was the circle of confusion. Contact prints are usually small enough that they are normally viewed at a distance of approximately 250 millimeters (about 10 inches). At this distance the human eye can resolve a detail that occupies an angle of about 1 arc minute. The eye cannot see a difference between a blurred circle and a sharp-edged circle that just fills this small angle at this viewing distance. The diameter of this circle is called the circle of confusion. Converting the diameter of this circle into a size measurement, we get about 0.1 millimeters. If we assume a standard print size of 8 by 10 inches (about 200 mm by 250 mm) and divide this by the circle of confusion, then an 8x10 print would represent about 2000x2500 of the smallest discernible points. If these points are equated to their equivalence in digital pixels, then the resolution of an 8x10 print would be about 2000x2500 pixels or about 250 pixels per inch (100 pixels per centimeter). The circle of confusion used for 4x5 film has traditionally been that of a contact print viewed at the standard 250 mm viewing distance.
  • How Does the Light Adjustable Lens Work? What Should I Expect in The
    How does the Light Adjustable Lens work? The unique feature of the Light Adjustable Lens is that the shape and focusing characteristics can be changed after implantation in the eye using an office-based UV light source called a Light Delivery Device or LDD. The Light Adjustable Lens itself has special particles (called macromers), which are distributed throughout the lens. When ultraviolet (UV) light from the LDD is directed to a specific area of the lens, the particles in the path of the light connect with other particles (forming polymers). The remaining unconnected particles then move to the exposed area. This movement causes a highly predictable change in the curvature of the lens. The new shape of the lens will match the prescription you selected during your eye exam. What should I expect in the period after cataract surgery? Please follow all instructions provided to you by your eye doctor and staff, including wearing of the UV-blocking glasses that will be provided to you. As with any cataract surgery, your vision may not be perfect after surgery. While your eye doctor selected the lens he or she anticipated would give you the best possible vision, it was only an estimate. Fortunately, you have selected the Light Adjustable Lens! In the next weeks, you and your eye doctor will work together to optimize your vision. Please make sure to pay close attention to your vision and be prepared to discuss preferences with your eye doctor. Why do I have to wear UV-blocking glasses? The UV-blocking glasses you are provided with protect the Light Adjustable Lens from UV light sources other than the LDD that your doctor will use to optimize your vision.
  • Depth of Focus (DOF)
    Erect Image: An image in which the orientations of left, right, top, bottom and moving directions are the same as those of a workpiece on the workstage. Depth of Focus (DOF), unit: mm: Also known as ‘depth of field’, this is the distance (measured in the direction of the optical axis) between the two planes which define the limits of acceptable image sharpness when the microscope is focused on an object. As the numerical aperture (NA) increases, the depth of focus becomes shallower, as shown by the expression DOF = λ / (2·(NA)²), where λ = 0.55µm is often used as the reference wavelength. Example: for an M Plan Apo 100X lens (NA = 0.7), the depth of focus of this objective is 0.55µm / (2 × 0.7²) ≈ 0.6µm. Field number (FN), real field of view, and monitor display magnification, unit: mm: The observation range of the sample surface is determined by the diameter of the eyepiece’s field stop. The value of this diameter in millimeters is called the field number (FN). In contrast, the real field of view is the range on the workpiece surface when actually magnified and observed with the objective lens. The real field of view, i.e. the diameter of the range of the workpiece that can be observed with the microscope, can be calculated with the following formula: real field of view = FN of eyepiece / objective lens magnification. Bright-field Illumination and Dark-field Illumination: In brightfield illumination a full cone of light is focused by the objective on the specimen surface. This is the normal mode of viewing with an optical microscope. With darkfield illumination, the inner area of the light cone is blocked so that the surface is only illuminated by light from an oblique angle.