Correcting Lens Distortions in Digital Photographs


Wolfgang Hugemann
© 2010 by EVU

Abstract

Wide-angle lenses (or rather zoom lenses set to a short focal length) typically produce a pronounced barrel distortion. This lens distortion affects damage mappings (i.e. the superposition of damage photographs) as well as perspective rectifications. Lens distortion can, however, mostly be corrected by applying suitable algorithmic transformations to the digital photograph. The paper presents the algorithms used for this correction, together with programs that either perform the entire task of correction or allow one to determine the lens correction parameters. The paper concludes with some (rectified) example images and an estimation of the gains in accuracy achieved by applying lens correction algorithms.

Introduction

Ideally, a photograph should be a perfect perspective mapping of the photographed scene. This holds especially when the photograph is further processed into a to-scale representation of the pictured object, as is often the case in accident reconstruction, e.g. for shots of damaged vehicles or of the road surface – the latter often taken from an elevated position and then rectified.

To keep lens distortions reasonably small, the general advice is to use a small-angle lens, if possible a telephoto lens. In practice, this is however often impossible, as the space around the photographed object is limited and the required distance to the object cannot be achieved. This holds especially for shots of the road surface, which are mostly taken with wide-angle lenses in order to cover the desired space. Furthermore, there are a lot of 'external' photographs over which the reconstructionist has no influence on the camera settings: pictures taken by the people involved in the accident or by the police, who – at least in Germany – often have to make do with inadequate equipment.

By the use of suitable software, lens distortions can be corrected in retrospect, eliminating much of the error produced by unsatisfactory shots. In the following, we will present mathematical approaches to describe lens distortion, present some programs that will do most of the job for you, and demonstrate the accuracy achieved.

Modelling lens distortion

When transforming picture coordinates into real-world coordinates, we mostly use cartesian systems for both. This coordinate system seems to fit the problem most naturally, as photographs are rectangular and some (not very) special set-up situations (so-called nadir or coplanar photographs) are simply to-scale mappings of the photographed plane.

When describing lens distortions, we should however use a polar coordinate system, with the lens's main axis as its origin: obviously, the lens is an axially symmetric object, so we should expect all distortions to be rotation-symmetric. We assume the lens's main axis (the principal axis) to meet the image plane at an exact right angle, at the principal point.

In order to describe lens distortions, it is sufficient to investigate coplanar photographs, as any additional distortion created by the camera set-up is merely perspective and can be attended to in a later step. In coplanar perspective mapping, the real-world coordinates are just a fixed multiple of the image coordinates. It is common to denote real-world coordinates by capital letters X, Y and image coordinates by small letters x, y. So for a coplanar perspective mapping we arrive at

    X - X_0 = c_1 (\hat{x} - x_0)    (1)
    Y - Y_0 = c_2 (\hat{y} - y_0)    (2)

These equations allow one to define the origins of both coordinate systems freely. Common choices for the coordinate origin in digital photographs are the upper left corner and the principal point, which should ideally coincide with the middle point of the image. Furthermore, the equations allow different scale factors for the x- and y-directions, c_1 and c_2. For digital photographs, these scale factors are identical, but video cameras might (virtually) use non-square pixels, as is the case in DV cameras.

In the equations above we used x̂, ŷ rather than x, y to denote the image coordinates of the ideal perspective mapping: this is the common way to denote estimates, and estimates they are, having to be derived from the physical image coordinates x, y.

In order to describe lens distortion, it suffices to establish the relationship between the physical coordinates of a pixel x, y and the coordinates x̂, ŷ of the ideal perspective mapping which should be used in the above equations. In the following, we assume x̂, ŷ to refer to the principal point and x, y to refer to the physical centre of the image, the offset of the principal point being denoted by x̂_p, ŷ_p. We can split lens distortion into a radial and a tangential (torsional) part (figure 1, [2]):

    \hat{r}^2 = \hat{x}^2 + \hat{y}^2    (3)
    r^2 = (x - \hat{x}_p)^2 + (y - \hat{y}_p)^2    (4)
    \hat{r} = f(r)\, r    (5)
    r \hat{\alpha} = g(r, \alpha)\, r \alpha    (6)

Fig. 1: Common lens distortions: a) barrel, b) pincushion [2]

The offset of the principal point x̂_p, ŷ_p is camera-specific, as it results from manufacturing tolerances in regard to the mutual mounting of the camera sensor and the lens. The distortion functions f(r), g(r, α) are however specific to a certain make and model of camera if the (zoom) lens is non-interchangeable, as is the case for consumer and mobile phone cameras. For interchangeable lenses, the distortion is lens-specific, possibly needing some correction in regard to the exact sensor size of the camera it is mounted on.

It is common to model the distortion by dimensionless functions f(r), g(r, α), as in the above equations. Moreover, the radius r is often normalised to the dimensions of the sensor, such that f(r) and r are both of magnitude one.

Experience shows that the effects of radial lens distortion f(r) exceed those of torsional distortion g(r, α) by at least an order of magnitude. Consequently, most software only models radial distortion, i.e. tries to determine the functional relationship f(r) for a certain make and model of camera, for example an SLR camera combined with a specific lens. For zoom lenses, the functional relationship f(r) is specific to a certain focal length, i.e. it has to be modelled for different settings of the focal length.

Basically, we have to distinguish between

barrel distortion, f(r) < 1: The apparent effect is that of an image which has been mapped around a sphere or barrel.

pincushion distortion, f(r) > 1: The visible effect is that lines that do not go through the centre of the image are bowed inwards, towards the centre of the image, like a pincushion.

Most consumer cameras show pronounced barrel distortion when set to a short focal length.

Modelling radial distortion

The most common functional approach for f(r) is a polynomial

    f(r) = 1 + a_1 r + a_2 r^2 + \ldots + a_n r^n    (7)

Optical theory shows that this polynomial should only feature even powers:

    f(r) = 1 + a_2 r^2 + a_4 r^4 + \ldots + a_{2n} r^{2n}    (8)

In practice, the number of parameters is often limited to about three, i.e. either

    f(r) = 1 + a_1 r + a_2 r^2 + a_3 r^3    (9)

or

    f(r) = 1 + a_1 r^2 + a_2 r^4 + a_3 r^6    (10)

with some of the coefficients possibly being zero.

Tangential distortion

The most common model incorporating tangential distortion is the Brown-Conrady model [6, 5], which adds a tangential component to the radial distortion:

    \begin{pmatrix} \hat{x} \\ \hat{y} \end{pmatrix} =
    (1 + a_1 r^2 + a_2 r^4 + a_3 r^6) \begin{pmatrix} x \\ y \end{pmatrix} +
    \begin{pmatrix} 2 a_4 x y + a_5 (r^2 + 2 x^2) \\ a_4 (r^2 + 2 y^2) + 2 a_5 x y \end{pmatrix}    (11)

Determining the lens parameters

When determining the lens parameters, all programs rely on the same paradigm: the ideal perspective mapping should map real-world straight lines to straight lines in the image. So if a set of points P_0, P_1, ..., P_n is known to lie on a straight line, their images p_0, p_1, ..., p_n should also fall onto a straight line, i.e. satisfy the equation

    \vec{p}_i = \vec{p}_0 + \varrho_i \vec{e}    (12)

Any deviation from this rule has to be attributed to lens distortion

    \hat{\vec{p}}_i = \vec{f}(\vec{p}_i)    (13)

We need two points to determine the two parameters defining a straight line. Each additional point will then provide one more equation to determine the parameters used in \vec{f}(\vec{r}). So if our functional approach is

    \hat{r} = (1 + a_1 r + a_2 r^2)\, r    (14)

we would have to provide at least four points on one straight line in order to determine the two sought parameters a_1 and a_2. In practice, calibration programs mostly use a rectangular grid of straight lines, often a chequerboard, to generate a set of equations and then calculate the mapping parameters by a nonlinear least-squares fit. Some programs generate the set of control points on their own, often using pre-defined templates; other programs require the user to select the control points from the image.

Ready-made solutions

Fortunately, there are quite a few programs that will correct lens distortion in arbitrary photographs automatically. These programs rely on the EXIF information embedded in the digital photograph, providing the make, model and focal length. They are shipped with an extensive camera/lens database, providing the correction parameters for a variety of camera/lens combinations.

Fig. 2: PTlens's program window (stand-alone version)
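The even-power radial model of equations (4), (5) and (10) can be sketched in a few lines of Python. The function name and the assumption that coordinates are already normalised to the sensor dimensions (so that r is of magnitude one) are choices made for this sketch, not prescriptions from the paper:

```python
import numpy as np

def undistort_points(x, y, coeffs, xp=0.0, yp=0.0):
    """Map physical image coordinates (x, y) to the coordinates
    (x_hat, y_hat) of the ideal perspective mapping, using the
    even-power radial model f(r) = 1 + a1 r^2 + a2 r^4 + ...
    Coordinates are assumed normalised to the sensor dimensions;
    (xp, yp) is the offset of the principal point."""
    x = np.asarray(x, dtype=float) - xp   # refer to the principal point
    y = np.asarray(y, dtype=float) - yp
    r2 = x * x + y * y                    # squared radius, cf. eq. (4)
    f = np.ones_like(r2)
    for i, a in enumerate(coeffs, start=1):
        f += a * r2 ** i                  # even powers only, cf. eq. (10)
    return f * x, f * y                   # r_hat = f(r) r, cf. eq. (5)
```

Passing `coeffs=[-0.1]`, for instance, models a mild barrel distortion (f(r) < 1 away from the centre); a point at normalised (0.5, 0) is pulled inward to about (0.4875, 0).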
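For a single point, the Brown-Conrady model of equation (11) evaluates to plain arithmetic: a radial polynomial in even powers of r common to both coordinates, plus the tangential (decentring) terms governed by a4 and a5. The function name below is invented for this sketch:

```python
def brown_conrady(x, y, a1, a2, a3, a4, a5):
    """Evaluate the Brown-Conrady model of eq. (11) for one point
    with normalised coordinates (x, y), returning (x_hat, y_hat)."""
    r2 = x * x + y * y                                  # r^2
    radial = 1.0 + a1 * r2 + a2 * r2 ** 2 + a3 * r2 ** 3
    x_hat = radial * x + 2.0 * a4 * x * y + a5 * (r2 + 2.0 * x * x)
    y_hat = radial * y + a4 * (r2 + 2.0 * y * y) + 2.0 * a5 * x * y
    return x_hat, y_hat
```

With a4 = a5 = 0 this reduces to the purely radial model of equation (10), which is consistent with the observation above that most software ignores the tangential part.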
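The straight-line paradigm of equations (12)–(14) can be illustrated with a deliberately simplified one-parameter model f(r) = 1 + a1 r^2, estimated by a coarse grid search rather than the nonlinear least-squares fit that real calibration programs use; all function names and the search range are assumptions of the sketch:

```python
import numpy as np

def radial_undistort(pts, a1):
    """Apply the one-parameter radial model f(r) = 1 + a1 r^2
    to an (n, 2) array of normalised image points."""
    r2 = (pts ** 2).sum(axis=1, keepdims=True)
    return (1.0 + a1 * r2) * pts

def line_residual(pts):
    """RMS distance of 2-D points from their best-fit straight line
    (total least squares: smallest singular value of the centred
    point cloud) -- zero exactly when eq. (12) is satisfied."""
    centred = pts - pts.mean(axis=0)
    return np.linalg.svd(centred, compute_uv=False)[-1] / np.sqrt(len(pts))

def calibrate_a1(pts, search=np.linspace(-0.5, 0.5, 1001)):
    """Pick the a1 that makes the undistorted images of points on a
    real-world straight line as collinear as possible."""
    return min(search, key=lambda a1: line_residual(radial_undistort(pts, a1)))
```

Feeding in the image points of a single chequerboard row bowed by barrel distortion would drive `calibrate_a1` towards a compensating coefficient; in practice many lines across the frame are combined into one joint fit, as described above.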