Pi Camera! By: Juan Medina

Outline:
● Digital Cameras
  ○ History (Mariner) and connectivity/social impact
● What is an image sensor?
  ○ Types
  ○ Color separation and performance metrics
● Pi Camera
  ○ Purpose, price, datasheet, examples!
● Conclusions

Digital Cameras: History and Social Impact

On July 15, 1965, the Mariner 4 spacecraft obtained the first flyby digital image of Mars (Figure 1: https://en.wikipedia.org/wiki/Mariner_4) with a camera system designed by the NASA Jet Propulsion Laboratory. That system used a video camera tube followed by a digitizer (somewhat distant from what current digital cameras are) to record 21 images on tape, which were then transmitted to Earth. By the way, that transmission took about 20 days to complete.

In 1975, Steven Sasson, an engineer at Kodak, built the first digital camera using a newly developed CCD image sensor (explained later) sold by Fairchild Semiconductor in 1973. The camera weighed 3.6 kg, had 0.01 megapixels, and took 23 seconds to capture its first image (Figure 2: http://giantsofscience.weebly.com/steven-sasson.html).

Nowadays, digital cameras are a fundamental component of our lives. A tremendous amount of digital visual content is created every day (Figure 3: http://rocketpost.com/blog/instagram-direct-how-to-use-four-tips/). From our personal lives up to academic and professional environments, digital cameras have changed our lives dramatically; image sensors are everywhere: phones, PCs, security systems, robots, scanners, printers, etc.

What's an image sensor?

An image sensor transforms the attenuation of electromagnetic radiation, as it passes through or is reflected off objects, into small currents that convey the information. Image sensors have a wide variety of applications, including digital cameras, medical imaging, night vision, thermal imaging, and radar.

The first analog sensors used were video camera tubes (e.g., on Mariner 4). Wikipedia describes their functionality as follows: "The cathode ray is scanned across a target which is illuminated by the scene to be broadcast. The current, then, is dependent on the brightness of the image on the target." I'll just describe the image orthicon (Figure 4: https://en.wikipedia.org/wiki/Video_camera_tube), which was used in the very first televisions. Light passes through a camera lens and falls onto a photocathode (a photosensitive plate at negative potential), where it is converted into an electron image. The electrons are then accelerated and fired against a glass target, where the image is formed.

Video camera tubes were followed by semiconductor charge-coupled devices (CCDs) and active pixel sensors built in complementary metal-oxide-semiconductor (CMOS) technology. Figure 5 (http://www.dpreview.com/forums/post/52351544), which I obtained from a blog post, presents the general functionality of a modern image sensor. The incoming light first passes through an IR-blocking filter that removes infrared (a). Then a physical color filter array (CFA) controls the color of the light reaching each color-blind sensor cell (b). Each cell transforms the light into electricity, which is then digitized (c). Lastly, millions of such cells are arranged to construct a megapixel image sensor (d).
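To make that cell-level chain concrete, here is a toy Python model of a single photosite. This is entirely my own illustration; the quantum efficiency, full-well capacity, and ADC depth are made-up constants, not values from any real sensor:

```python
import numpy as np

# Made-up constants for illustration only (not from any datasheet).
QUANTUM_EFFICIENCY = 0.5   # fraction of photons that free an electron
FULL_WELL = 10_000         # electrons the cell holds before saturating
ADC_BITS = 10              # resolution of the analog-to-digital step

def sense_cell(photon_count: int, rng: np.random.Generator) -> int:
    """Convert the photons hitting one cell into a digital value."""
    # Photoelectric conversion: each photon frees an electron with a
    # probability equal to the quantum efficiency (also models shot noise).
    electrons = rng.binomial(photon_count, QUANTUM_EFFICIENCY)
    # The photosite saturates at its full-well capacity.
    electrons = min(electrons, FULL_WELL)
    # The ADC quantizes the accumulated charge into 2**ADC_BITS levels.
    return round(electrons / FULL_WELL * (2 ** ADC_BITS - 1))

rng = np.random.default_rng(seed=42)
for photons in (100, 5_000, 50_000):  # dim, medium, saturating exposures
    print(photons, "photons ->", sense_cell(photons, rng), "counts")
```

The point of the toy model is the order of operations: photoelectric conversion first, saturation second, quantization last, mirroring steps (b) and (c) above.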
CCDs (Charge-Coupled Devices)

In this modern sensor (the one used by Steven Sasson), an image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. After the image is projected onto the array, a control circuit causes each capacitor to transfer its contents to the next one (Figure 6: https://www.microscopyu.com/articles/digitalimaging/ccdintro.html). Lastly, a charge amplifier converts the charges into a sequence of voltages that can then be sampled and digitized.

This whole process is brilliantly represented by a simple rain-bucket analogy from Nikon (illustrated in the same MicroscopyU article as Figure 6). A gate is opened, rain drops fall, and buckets are filled. A parallel register drops the water row by row into a serial bucket array. Lastly, the contents of each bucket are poured into a "calibrated measuring container" and the cycle repeats.

Complementary Metal-Oxide-Semiconductor (CMOS)

On the other side, a CMOS sensor is an active pixel sensor, which consists of an integrated circuit containing an array of pixel sensors. Each pixel sensor contains a photodetector and an active amplifier built from CMOS transistors. Figure 7 (http://www.digitalbolex.com/global-shutter/) presents a very instructive schematic of how CMOS and CCD functionality differ. On the left, light is sensed by a photodiode and generates an electric charge that is stored in an electron transfer register. As explained before, a control circuit moves these charges across the vertical and horizontal registers, where the signal is finally amplified and digitized. On the right, by contrast, as soon as light arrives at the photodiodes, the per-pixel CMOS amplifiers boost the signal and send it out through metal wires.

CCDs vs. CMOS

Even though there is no substantial difference in image quality, CMOS sensors can be implemented with fewer components, use less power, and can be read out faster than CCD sensors. As such, CMOS sensors are less expensive to manufacture and therefore more common. There are some hybrid sensors that leverage the advantages of both CMOS and CCDs, and high-pixel-count cameras still use CCDs. Figure 8 (http://www.digitalbolex.com/global-shutter/) presents some image sensor trends. CCDs are still used for high-performance applications such as professional DSCs, motion analysis, and medical imaging. CMOS sensors have a wider range of applications: automotive, toys, phones, biometrics, etc.

Color Separation

Color separation is an important topic that I would like to cover briefly. Once you have your image sensor cells (IR filter, color filter, signal transducer), you have to arrange them in some way in order to obtain an RGB image. There are different color-separation mechanisms, such as the Bayer filter sensor, the Foveon X3 sensor, and 3CCD. In this blog post I'll just cover the Bayer filter color filter array. Figure 9 (https://en.wikipedia.org/wiki/Bayer_filter) presents the Bayer filter CFA pattern. The pattern is 50% green, 25% red, and 25% blue, which is why it is also called RGBG. This proportion mimics the physiology of the human eye. Demosaicing is the process of translating the pattern into a three-channel image: for each pixel there has to be a value for red, green, and blue. There are different techniques for demosaicing. For instance, at a green pixel there are always 2 red neighbor pixels, so the missing red value can be interpolated from them (see the sketch below).
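To make that interpolation concrete, here is a minimal bilinear demosaicing sketch in Python/NumPy. It is my own toy illustration, not the algorithm of any particular camera; it assumes an RGGB layout and uses normalized convolution, with edges handled by mirroring:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw: np.ndarray) -> np.ndarray:
    """Bilinear demosaicing of an RGGB Bayer mosaic (toy sketch)."""
    h, w = raw.shape
    rows, cols = np.indices((h, w))
    # Boolean masks marking where each color was actually sampled.
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    g_mask = (rows % 2) != (cols % 2)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)

    kernel = np.ones((3, 3))
    out = np.empty((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        sparse = np.where(mask, raw, 0.0)
        # Normalized convolution: sum the sampled neighbors of each pixel,
        # then divide by how many neighbors actually carried this color.
        total = convolve(sparse, kernel, mode='mirror')
        count = convolve(mask.astype(float), kernel, mode='mirror')
        # Keep the measured value where this color was sampled,
        # interpolate from neighbors everywhere else.
        out[..., c] = np.where(mask, raw, total / count)
    return out

# Sanity check: a uniform gray mosaic must demosaic to equal channels.
raw = np.full((4, 4), 100.0)
print(demosaic_rggb(raw)[0, 0])  # -> [100. 100. 100.]
```

Running it on a uniform mosaic returns the same value in all three channels, which is a quick check that the neighbor averaging is wired correctly.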
Performance Metrics

Image sensors are commonly compared using 3 particular metrics:
1. Pixel count: the total number of pixels (height x width), often measured in megapixels. Pixel count is an important metric; however, image sensors with the same number of pixels but different physical sizes can produce different image quality. Larger sensors produce better-quality images.
2. Lens quality: resolution, distortion, dispersion.
3. Dynamic range: the range of luminosity that can be reproduced accurately.

Pi Camera

The Pi Camera is a 5-megapixel Raspberry Pi camera module capable of recording 1080p video and still images (Figure 10: http://www.adafruit.com/images/1200x900/1367-00.jpg). The Raspberry Pi (RPi) is a sort of next-level Arduino: a small computer running Linux, with digital inputs and outputs. Unlike the Arduino, the RPi does not have analog I/O. The Pi Camera module connects directly to the RPi's Camera Serial Interface using a ribbon cable. The board itself is very small (25 x 20 x 9 mm) and weighs about 3 g. Unlike more sophisticated cameras, the module has a fixed-focus lens onboard (although the focus can be carefully adjusted).

The RPi combined with the camera module has many applications, from rapid prototyping of Internet of Things (IoT) devices up to time-lapse rigs. For instance, I've been using this sensor to build a smart garbage can. The e-can is a regular trash container equipped with the RPi and the Pi Camera; the goal is to take pictures of garbage and classify it in real time (e.g., plastic vs. metal). The RPi is quite cheap: it costs about 35 USD (you can buy it here: https://www.sparkfun.com/products/11868).

Something very important to understand is that the Pi Camera is an integrated circuit board designed to connect smoothly to the RPi. The particular image sensor used on it is a 1/4" color CMOS QSXGA image sensor, the OV5647, manufactured by OmniVision. The datasheet (http://cdn.sparkfun.com/datasheets/Dev/RaspberryPi/ov5647_full.pdf) is a very large PDF that describes the sensor thoroughly. In this blog post, I'll review its general aspects.

First, the sensor features (Figure 11). Even though there is a long list of features, I'll comment on just 3:
1. 1.4 x 1.4 um pixels. The datasheet indicates that the sensor uses OmniBSI technology for high performance (high sensitivity, low noise).
2. Automatic image control functions such as Automatic Exposure Control (AEC), Automatic White Balance (AWB), and others.
3. Support for 8/10-bit RGB output formats. As we learned previously in this post, a CMOS pixel's output is an analog signal that must be digitized; in this particular case, the sensor itself outputs the digitized data.

The datasheet also presents the key specifications of the sensor (Figure 12). From this list we learn that there are 2592 x 1944 active sensor cells (5 megapixels). We also see that the core of the sensor works at 1.5 V and has an embedded voltage regulator, while the analog electronics work at 3 V and the digital I/O at 1.7-3 V.
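To close, here is a minimal sketch of grabbing a full-resolution still with the picamera Python library on the RPi. This assumes the library is installed (it ships with Raspbian); the output filename is my own choice:

```python
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (2592, 1944)  # the OV5647's full 5 MP frame
try:
    camera.start_preview()
    # Give the sensor's AEC/AWB control loops time to settle.
    sleep(2)
    camera.capture('still.jpg')
finally:
    camera.stop_preview()
    camera.close()
```

The two-second pause before capturing gives the automatic exposure and white balance functions mentioned in the datasheet time to converge; capturing immediately tends to produce dark or color-shifted frames.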