Computational Photography Research at Stanford


Computational photography research at Stanford
January 2012
Marc Levoy, Computer Science Department, Stanford University

Outline
• light field photography
• FCam and the Stanford Frankencamera
• user interfaces for computational photography
• ongoing projects
  - FrankenSLR
  - burst-mode photography
  - languages and architectures for programmable cameras
  - computational videography and cinematography
  - multi-bucket pixels

Light field photography using a handheld plenoptic camera
Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan (Proc. SIGGRAPH 2005 and Stanford TR 2005-02)

Prototype camera
• Contax medium format camera
• Kodak 16-megapixel sensor
• Adaptive Optics microlens array: 125µ square-sided microlenses
• 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens

Digital refocusing
• example of digital refocusing
• refocusing portraits
• light field photography (Flash demo)

Commercialization
• trades off (excess) spatial resolution for the ability to refocus and adjust the perspective
• sensor pixels should be made even smaller, subject to the diffraction limit; for example:
  36mm × 24mm ÷ 2.5µ pixels = 266 Mpix (20K × 13K pixels)
  4000 × 2666 pixels × 20 × 20 rays per pixel, or
  2000 × 1500 pixels × 3 × 3 rays per pixel = 27 Mpix

Camera 2.0
Marc Levoy, Computer Science Department, Stanford University

The Stanford Frankencameras (Frankencamera F2 and Nokia N900 “F”)
• facilitate research in experimental computational photography
• for students in computational photography courses worldwide
• proving ground for plugins and apps for future cameras

CS 448A: Computational Photography

CS 448 projects
• M. Culbreth, A. Davis: A phone-to-phone 4D light field fax machine
• J. Koberstein, A. Mattos: Motion blur detection as a game controller
• N. Sakunkoo: Camera-assisted self portraits on mobile devices
• C. DuHadway: Driving/parking assistive mosaics using car cameras
• C.-H. Chen, S.H. Park: High dynamic range (HDR) viewfinding
• V. Chandrasekhar, G. Takacs: High dynamic range imaging on the Nokia N95
• L. Kim, S. Que: User interfaces for adjusting focus and tone
• A.Y.L. Chan, S. Meador: Photography using circular/hemispherical ring flash
• P. Huang, Z. Zheng: Image deblurring from multiple images
• M. Smith: Representative (non-real) thumbnails for small screens
• A. Medearis: Multi-shot noise reduction for sports photography
• D. Johnson: Synthetic aperture photography using a painted aperture
• D. Chan: Synthetic aperture rear view mirror for safer driving

Frankencamera architecture
[diagram: (1) the application submits capture requests to the image sensor; (2) devices such as the lens and flash are scheduled as actions during configure and expose; (3) readout passes to the imaging processor along with metadata; (4) image processing and statistics collection return images and statistics to the application]

Frankencamera software: the FCam API

    Sensor sensor;
    Flash flash;

    vector<Shot> burst(2);
    burst[0].exposure = 1/200.;
    burst[1].exposure = 1/30.;

    Flash::FireAction fire(&flash);
    fire.time = burst[0].exposure / 2;
    burst[0].actions.insert(fire);

    sensor.stream(burst);
    while (1) {
        Frame flashFrame = sensor.getFrame();
        Frame noflashFrame = sensor.getFrame();
    }

Demonstration applications
• Canon 430EX (smaller flash) strobed continuously
• Canon 580EX (larger flash) fired once at the end of the exposure

FCam is being actively ported to...
• Android (at least by NVIDIA)
• iOS?
• Nokia/Microsoft?
• Android point-and-shoot cameras?

Prospects for computational photography
• the megapixel wars are over (and it’s about time)
• computational photography is the next battleground in the camera industry (it’s already starting)

Premise: available computing power in cameras is rising faster than megapixels
[chart, 2005-2010: average megapixels in new cameras, CAGR = 1.2; NVIDIA GTX texture fill rate in billions of pixels per second, CAGR = 1.8; for comparison, the CAGR for Moore’s law = 1.5]
• this “headroom” permits more computation per pixel, or more frames per second, or less custom hardware

Interfaces for computational photography
• how will these features appear to consumers?
  - standard and invisible
  - standard and visible (and disable-able)
  - aftermarket plugins and apps for your camera
• but aren’t cameras too complicated already? won’t computational photography make this worse?

Sony NEX-5
• manual is 159 pages long, and full of grammatical mistakes
• scene modes are not explained well, e.g. “Sunset: Shoots the red of the sunset beautifully.” (page 43)
• user interface is difficult to navigate

ProHDR for the iPhone 4S
• most popular HDR capture method on the market
• app developers are making photography easy again!

SynthCam (free; circa 500,000 downloads in 2011)
• iPhone 4, single HD video frame vs. SynthCam aligning and averaging ~30 frames: SNR increases as sqrt(# of frames)
• Canon 5DII at 1/400 sec vs. Canon 5DII average of 15 shots of 1/6 sec (photos: Marc Levoy)

Removing foreground objects by translating the camera
• align the shots
• match histograms
• apply median filter

Ongoing projects
• Frankencamera F3 (“FrankenSLR”)
  - SLR-quality sensor, but big, heavy, and expensive ($10K)
  - Micron MT9001: cell-phone quality, $150
  - Cypress LUPA 4000: DSLR quality, $1500, arbitrary ROIs and non-destructive readout
• high-speed burst-mode photography
  - all still cameras should capture at (up to) 500 fps
  - capture while aiming the camera, with no shutter half-press
  - frameless photography: ROIs and MOIs (“M” = Moment)
• languages and architectures for programmable cameras
  - FCam was an API for camera control
  - now we need APIs for photographic processing and computer vision
• computational videography & cinematography
  - stabilization, stereo, view interpolation, free-viewpoint video

3D video stabilization [Agarwala 2011]
(http://web.cecs.pdx.edu/~fliu/project/subspace_stabilization/demo.mp4)

Camera 3.0: multi-bucket pixels
• a CMOS pixel architecture with multiple accumulation (storage) nodes per pixel, fed from the photodetector through transfer gates TX1 and TX2
• can flip between TX1 and TX2 very fast, and all pixels flip at once

Flash/noflash photography [Agrawal SIGGRAPH 2005]
• put the flash exposure in the middle of the ambient exposure
• another application: HDR capture with no differences in object motion between exposures

High dynamic range (HDR) photography
• merge a short exposure and a long exposure into one HDR image
• sequential exposures produce motion artifacts in the merged HDR result

HDR photography using 4-bucket pixels
• images with different exposures tightly interleaved (4-bucket pixel layout, 3D view)

HDR capture with the FCam API:

    Sensor sensor;
    Shot low, med, high;
    low.exposure = 1/80.;
    med.exposure = 1/20.;
    high.exposure = 1/5.;

    sensor.capture(low);
    sensor.capture(med);
    sensor.capture(high);

    Frame frames[3];
    frames[0] = sensor.getFrame();
    frames[1] = sensor.getFrame();
    frames[2] = sensor.getFrame();

    fused = mergeHDR(frames);

http://graphics.stanford.edu/projects/camera-2.0/