Computational Photography & Cinematography

Total Pages: 16

File Type: PDF, Size: 1020 KB

Computational photography & cinematography — Marc Levoy, Computer Science Department, Stanford University. (For a lecture specifically devoted to the Stanford Frankencamera, see http://graphics.stanford.edu/talks/camera20-public-may10-150dpi.pdf.) Slides © 2007–2010 Marc Levoy.

The future of digital photography
• the megapixel wars are over (and it's about time)
• computational photography is the next battleground in the camera industry (it's already starting)

Premise: available computing power in cameras is rising faster than megapixels
[Chart: average megapixels in new cameras (CAGR = 1.2) versus NVIDIA GTX texture fill rate in billions of pixels per second (CAGR = 1.8), 2005–2010; the CAGR for Moore's law is 1.5]
• this "headroom" permits more computation per pixel, more frames per second, or less custom hardware

The future of digital photography (continued)
• how will these features appear to consumers?
  • standard and invisible
  • standard and visible (and disable-able)
  • aftermarket plugins and apps for your camera
• traditional camera makers won't get it right
  • they'll bury it on page 120 of the manual (like Scene Modes)
  • the mobile industry will get it right (indie developers will help)

A taxonomy of the field [Nayar, Tumblin] (this slide recurs between topics throughout the deck):
• Film-like digital photography — digital processing: image processing applied to captured images to produce better images. Examples: interpolation, filtering, enhancement, dynamic range compression, color management, morphing, hole filling, artistic image effects, image compression, watermarking.
• Computational photography ("photography with bits") — computational processing: processing of a set of captured images to create new images. Examples: mosaicing, matting, super-resolution, multi-exposure HDR, light fields from multiple views, structure from motion, shape from X.
• Computational camera — computational imaging/optics: capture of optically coded images and computational decoding to produce new images. Examples: coded aperture, optical tomography, diaphanography, SA microscopy, integral imaging, assorted pixels, catadioptric imaging, holographic imaging.
• Computational sensor — smart sensors: detectors that combine sensing and processing to create smart pixels. Examples: artificial retina, retinex sensors, adaptive dynamic range sensors, edge-detect chips, focus-of-expansion chips, motion sensors.
• Computational illumination — smart light: adapting and controlling illumination to create revealing images. Examples: flash/no-flash, lighting domes, multi-flash for depth edges, dual photos, polynomial texture maps, 4D light sources.

Content-aware image resizing [Avidan SIGGRAPH 2007]
• to compress: remove pixels along lowest-energy seams, ordered using dynamic programming (a code sketch follows below)
• to expand: insert pixels along seams that, if removed in order, would yield the original image
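Seam carving as described in the bullets above can be sketched in a few lines of Python. This is a minimal illustration (a simple gradient energy and one vertical seam at a time), not Avidan and Shamir's implementation; the function and variable names are ours:

```python
import numpy as np

def remove_vertical_seam(img):
    """Remove one lowest-energy vertical seam from an H x W x 3 image."""
    gray = img.mean(axis=2)
    # Simple energy: sum of absolute horizontal and vertical gradients.
    energy = np.abs(np.gradient(gray, axis=1)) + np.abs(np.gradient(gray, axis=0))

    h, w = energy.shape
    cost = energy.copy()
    # Dynamic programming: cost[i, j] = energy[i, j] + min of the three pixels above.
    for i in range(1, h):
        left = np.roll(cost[i - 1], 1);   left[0] = np.inf
        right = np.roll(cost[i - 1], -1); right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)

    # Backtrack the minimal seam from bottom to top.
    seam = np.empty(h, dtype=int)
    seam[-1] = np.argmin(cost[-1])
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + np.argmin(cost[i, lo:hi])

    # Drop the seam pixel from every row.
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1, 3)
```

Expansion works the same way in reverse: find the k lowest-cost seams as if removing them in order, then duplicate those pixels instead of deleting them.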
Content-aware image resizing, continued [Avidan SIGGRAPH 2007]
• application to object removal — now available in Photoshop!
• extendable to video

High dynamic range (HDR) imaging
• bracket the exposure: one shot is too dark, another too bright; a tone-mapped combination recovers both
• this example worked well, but another looks washed out — tone mapping is still hard to do
• no cameras automatically take HDR pictures (how much to bracket?)

Aligning a burst of short-exposure, high-ISO shots using the Casio EX-F1
• instead of a single 1/3 sec exposure, shoot a burst at 60 fps spanning the same 1/3 sec, then align and merge the frames

SynthCam on the iPhone 4
• a single HD video frame versus ~30 frames aligned and averaged
• SNR increases as the square root of the number of frames

Removing foreground objects by translating the camera (see the sketch below)
• align the shots
• match histograms
• apply a median filter
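A minimal sketch of that three-step pipeline in Python, assuming OpenCV and scikit-image ≥ 0.19 are available; the ORB-plus-homography alignment and the helper name remove_foreground are illustrative choices, not the lecture's code:

```python
import numpy as np
import cv2
from skimage.exposure import match_histograms  # assumes scikit-image >= 0.19

def remove_foreground(frames):
    """frames: list of H x W x 3 uint8 shots taken while translating the camera.
    Returns a background estimate with transient foreground objects suppressed."""
    ref = frames[0]
    gray = lambda im: cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kp_ref, des_ref = orb.detectAndCompute(gray(ref), None)

    stack = [ref.astype(np.float32)]
    for f in frames[1:]:
        # 1. Align: estimate a homography from ORB matches; RANSAC rejects foreground points.
        kp, des = orb.detectAndCompute(gray(f), None)
        matches = sorted(matcher.match(des, des_ref), key=lambda m: m.distance)[:200]
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        warped = cv2.warpPerspective(f, H, (ref.shape[1], ref.shape[0]))

        # 2. Match histograms to the reference so exposure shifts don't bias the median.
        stack.append(match_histograms(warped, ref, channel_axis=-1).astype(np.float32))

    # 3. Per-pixel temporal median: the static background wins; foreground objects drop out.
    return np.median(np.stack(stack), axis=0).astype(np.uint8)
```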
Stanford Multi-Camera Array [Wilburn SIGGRAPH 2005]
• 640 × 480 pixels × 30 fps × 128 cameras
• synchronized timing
• continuous streaming
• flexible arrangement

Synthetic aperture photography
• example using 45 cameras [Vaish CVPR 2004] (movie available at http://graphics.stanford.edu/projects/array)

Light field photography using a handheld plenoptic camera
Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan (Proc. SIGGRAPH 2005 and TR 2005-02)

Light field photography [Ng SIGGRAPH 2005] — prototype camera
• Contax medium format camera with a Kodak 16-megapixel sensor
• Adaptive Optics microlens array: 125 µm square-sided microlenses
• 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens (a refocusing sketch follows below)
• typical image captured by the camera (shown here at low resolution)

Digital refocusing
• examples of digital refocusing; refocusing portraits (movie available at http://refocusimaging.com)
• application to sports photography
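How a plenoptic capture like this can be refocused after the fact: carve the raw sensor image into per-microlens tiles (14 × 14 pixels each in the prototype above), treat them as a 4D light field, and shift-and-add the sub-aperture images. The sketch below assumes a monochrome raw image and uses crude integer shifts; the alpha parameterization and names are illustrative, not Ng's implementation:

```python
import numpy as np

def refocus(raw, px_per_lens=14, alpha=1.0):
    """raw: square monochrome sensor image whose side is a multiple of px_per_lens
    (e.g. 4088 = 292 * 14 in the prototype). alpha moves the synthetic focal plane:
    1.0 reproduces the original focus; values above or below refocus nearer or farther."""
    n = raw.shape[0] // px_per_lens            # number of microlenses per side
    # Reshape into a 4D light field: (s, t) indexes the microlens (spatial),
    # (u, v) indexes the pixel under each microlens (angular / sub-aperture).
    lf = raw[:n * px_per_lens, :n * px_per_lens] \
            .reshape(n, px_per_lens, n, px_per_lens) \
            .transpose(1, 3, 0, 2)             # -> (u, v, s, t)

    u0 = v0 = (px_per_lens - 1) / 2.0
    out = np.zeros((n, n), dtype=np.float64)
    for u in range(px_per_lens):
        for v in range(px_per_lens):
            # Shift each sub-aperture image in proportion to its angular offset,
            # then sum: the rays converge on the chosen virtual focal plane.
            shift_s = (1.0 - 1.0 / alpha) * (u - u0)
            shift_t = (1.0 - 1.0 / alpha) * (v - v0)
            out += np.roll(lf[u, v],
                           (int(round(shift_s)), int(round(shift_t))),
                           axis=(0, 1))
    return out / px_per_lens ** 2
```

Each synthesized photograph has only 292 × 292 spatial samples, which is the resolution trade-off implied by the 14 × 14 pixels-per-lens arithmetic above.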
Recommended publications
  • Videography and Photography: Critical Thinking, Ethics and Knowledge-Based Practice in Visual Media
    Videography and photography: Critical thinking, ethics and knowledge-based practice in visual media Storytelling through videography and photography can form the basis for journalism that is both consequential and high impact. Powerful images can shape public opinion and indeed change the world, but with such power comes substantial ethical and intellectual responsibility. The training that prepares journalists to do this work, therefore, must meaningfully integrate deep analytical materials and demand rigorous critical thinking. Students must not only have proficient technical skills but also must know their subject matter deeply and understand the deeper implications of their journalistic choices in selecting materials. This course gives aspiring visual journalists the opportunity to hone critical-thinking skills and devise strategies for addressing multimedia reporting’s ethical concerns on a professional level. It instills values grounded in the best practices of traditional journalism while embracing the latest technologies, all through knowledge-based, hands-on instruction. Learning objectives This course introduces the principles of critical thinking and ethical awareness in nonfiction visual storytelling. Students who take this course will: Learn the duties and rights of journalists in the digital age. Better understand the impact of images and sound on meaning. Develop a professional process for handling multimedia reporting’s ethical challenges. Generate and translate ideas into ethically sound multimedia projects. Understand how to combine responsible journalism with entertaining and artistic storytelling. Think and act as multimedia journalists, producing accurate and balanced stories — even ones that take a point of view. Course design This syllabus is built around the concept of groups of students going through the complete process of producing a multimedia story and writing journal entries about ethical concerns that surface during its development and execution.
  • USER MANUAL AKASO Brave 6 Plus Action Camera
    USER MANUAL — AKASO Brave 6 Plus Action Camera (V1.2)
    CONTENTS: What's in the Box (1); Your Brave 6 Plus (2); Getting Started (4); Overview of Modes (5); Customizing Your Brave 6 Plus (8); Playing Back Your Content (15); Deleting Your Content (15); Connecting to the AKASO GO App (15); Offloading Your Content (16); Maintaining Your Camera (16); Maximizing Battery Life (17); Battery Storage and Handling (17); External Microphone (18); Remote (18); Mounting Your Camera (20); Contact Us (22)
    WHAT'S IN THE BOX: Brave 6 Plus camera, waterproof housing, handlebar/pole mount, mounts 1–8, charger, helmet mounts, battery, protective backdoor, clips 1–2, tethers, lens cloth, USB cable, remote, bandages, quick start guide
    YOUR BRAVE 6 PLUS: 1 Shutter/Select button; 2 Power/Mode/Exit button; 3 Up/Wifi button; 4 Down button; 5 Speaker; 6 Lens; 7 USB Type-C port; 8 Micro HDMI port; 9 Touch screen; 10 Battery door; 11 MicroSD slot. Note: the camera does not record sound when it is in the waterproof case.
    GETTING STARTED: Welcome to your AKASO Brave 6 Plus. To capture videos and photos, you need a microSD card to start recording (sold separately). MICROSD CARDS: Please use brand-name microSD cards that meet these requirements: microSD, microSDHC, or microSDXC; UHS-III rating only; capacity up to 64GB (FAT32). Note: 1.
  • Bibliography.Pdf
    Bibliography Thomas Annen, Jan Kautz, Frédo Durand, and Hans-Peter Seidel. Spherical harmonic gradients for mid-range illumination. In Alexander Keller and Henrik Wann Jensen, editors, Eurographics Symposium on Rendering, pages 331–336, Norrköping, Sweden, 2004. Eurographics Association. ISBN 3-905673-12-6. 35,72 Arthur Appel. Some techniques for shading machine renderings of solids. In AFIPS 1968 Spring Joint Computer Conference, volume 32, pages 37–45, 1968. 20 Okan Arikan, David A. Forsyth, and James F. O’Brien. Fast and detailed approximate global illumination by irradiance decomposition. In ACM Transactions on Graphics (Proceedings of SIGGRAPH 2005), pages 1108–1114. ACM Press, July 2005. doi: 10.1145/1073204.1073319. URL http://doi.acm.org/10.1145/1073204.1073319. 47, 48, 52 James Arvo. The irradiance jacobian for partially occluded polyhedral sources. In Computer Graphics (Proceedings of SIGGRAPH 94), pages 343–350. ACM Press, 1994. ISBN 0-89791-667-0. doi: 10.1145/192161.192250. URL http://dx.doi.org/10.1145/192161.192250. 72, 102 Neeta Bhate. Application of rapid hierarchical radiosity to participating media. In Proceedings of ATARV-93: Advanced Techniques in Animation, Rendering, and Visualization, pages 43–53, Ankara, Turkey, July 1993. Bilkent University. 69 Neeta Bhate and A. Tokuta. Photorealistic volume rendering of media with directional scattering. In Alan Chalmers, Derek Paddon, and François Sillion, editors, Rendering Techniques ’92, Eurographics, pages 227–246. Consolidation Express Bristol, 1992. 69 Miguel A. Blanco, M. Florez, and M. Bermejo. Evaluation of the rotation matrices in the basis of real spherical harmonics. Journal Molecular Structure (Theochem), 419:19–27, December 1997.
  • By Patricia Holloway and Carol Mahan
    by Patricia Holloway and Carol Mahan. Kids and nature seem like a natural combination, but what was natural a generation ago is different today. Children are spending less time outdoors but continue to need nature for their physical, emotional, and mental development. This fact has led author Richard Louv to suggest that today's children are suffering from "nature-deficit disorder" (2008). Nature-deficit disorder is not a medical term, but rather a term coined by Louv to describe the current disconnect from nature. While there may be many reasons today's children are not connected with nature, a major contributor is time spent on indoor activities. According to a national survey by the Kaiser Family Foundation, today's 8- to 18-year-olds spend an average of seven hours and 38 minutes per day on entertainment media such as TV, video games, computers, and cell phones, leaving little time for nature exploration or much else (2010). According to Louv, children need nature for learning and creativity (2008). A hands-on experiential approach, such as nature exploration, can have other benefits for students, such as increased motivation and enjoyment of learning and improved skills, including communication skills (Haury and Rillero 1994).
    ENHANCE NATURE EXPLORATION WITH TECHNOLOGY. Digital storytelling — Our idea was to use technology as a tool to provide or enhance students' connection with nature. Students are already engaged with technology; we wanted to … Engaging with nature — Students need to have experiences in nature in order to develop their stories. To spark students' creativity, have them read or listen to stories about nature.
  • Filming from Home Best Practices
    Instructions for Filming From Home. As we continue to adapt to the current public health situation, video can be a fantastic tool for staying connected and engaging with your community. These basic guidelines will help you film compelling, high-quality videos of yourself from your home using tools you already have on hand. For more information, contact Jake Rosati in the Communications, Marketing, and Public Affairs department at [email protected].
    SETTING YOUR SCENE: Whether you're using a video camera, smartphone, or laptop, there are a few simple steps you can take to make sure you're capturing clean, clear footage. You can also use these scene-setting tips to help light and frame Zoom video calls.
    Background: ● Make sure your subject—you, in this case—is the main focus of the frame. Avoid backgrounds that are overly busy or include a lot of extraneous motion. Whenever possible, ensure that other people's faces are not in view. ● If you're filming in front of a flat wall, sit far enough away to avoid casting shadows behind you. ● If you have space, you can add depth to your shot by sitting or standing in front of corners. This will draw the viewer's eye to your subject. DO add depth to your shot by filming in front of a corner. DON'T sit too close to a flat wall; it can cast distracting shadows.
    Lighting: ● The best at-home lighting is daylight. Try to position yourself facing a window (or set up a light if daylight isn't an option) with your face pointing about three-quarters of the way toward the light.
  • Real-Time Burst Photo Selection Using a Light-Head Adversarial Network
    Real-time Burst Photo Selection Using a Light-Head Adversarial Network. Baoyuan Wang, Noranart Vesdapunt, Utkarsh Sinha, Lei Zhang. Microsoft Research. Abstract. We present an automatic moment capture system that runs in real-time on mobile cameras. The system is designed to run in the viewfinder mode and capture a burst sequence of frames before and after the shutter is pressed. For each frame, the system predicts in real-time a "goodness" score, based on which the best moment in the burst can be selected immediately after the shutter is released, without any user interference. To solve the problem, we develop a highly efficient deep neural network ranking model, which implicitly learns a "latent relative attribute" space to capture subtle visual differences within a sequence of burst images. Then the overall goodness is computed as a linear aggregation of the goodnesses of all the latent attributes. The latent relative attributes and the aggregation function can be seamlessly integrated in one fully convolutional network and trained in an end-to-end fashion. To obtain a compact model which can run on mobile devices in real-time, we have explored and evaluated a wide range of network design choices, taking into account the constraints of model size, computational cost, and accuracy. Extensive studies show that the best frame predicted by our model hit users' top-1 (out of 11 on average) choice for 64.1% of cases and top-3 choices for 86.2% of cases. Moreover, the model (only 0.47 MB) can run in real time on mobile devices, e.g.
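The paper's light-head adversarial architecture is not reproduced here, but the inference-time idea — score every frame of a burst with a compact network built from latent relative attributes plus a linear aggregation, then keep the argmax — can be sketched in PyTorch as follows. All layer sizes and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FrameScorer(nn.Module):
    """Tiny stand-in for a burst-ranking model: maps each frame to K latent
    'relative attribute' scores, then linearly aggregates them into one goodness value."""
    def __init__(self, k_attributes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.attributes = nn.Linear(32, k_attributes)   # latent relative attributes
        self.aggregate = nn.Linear(k_attributes, 1)     # linear aggregation -> goodness

    def forward(self, x):                # x: (N, 3, H, W), the N frames of one burst
        f = self.features(x).flatten(1)  # (N, 32)
        return self.aggregate(self.attributes(f)).squeeze(1)  # (N,) goodness scores

def best_frame(burst, model):
    """burst: (N, 3, H, W) float tensor. Returns the index of the highest-scoring frame."""
    with torch.no_grad():
        return int(torch.argmax(model.eval()(burst)))
```

Training such a model, per the abstract's relative-attribute framing, would use pairwise (within-burst) preference supervision rather than absolute labels.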
  • Efficient Ray Tracing of Volume Data
    Efficient Ray Tracing of Volume Data. TR88-029, June 1988. Marc Levoy, Department of Computer Science, The University of North Carolina at Chapel Hill, CB#3175, Sitterson Hall, Chapel Hill, NC 27599-3175. UNC is an Equal Opportunity/Affirmative Action Institution. Abstract. Volume rendering is a technique for visualizing sampled scalar functions of three spatial dimensions without fitting geometric primitives to the data. Images are generated by computing 2-D projections of a colored semi-transparent volume, where the color and opacity at each point is derived from the data using local operators. Since all voxels participate in the generation of each image, rendering time grows linearly with the size of the dataset. This paper presents a front-to-back image-order volume rendering algorithm and discusses two methods for improving its performance. The first method employs a pyramid of binary volumes to encode coherence present in the data, and the second method uses an opacity threshold to adaptively terminate ray tracing. These methods exhibit a lower complexity than brute-force algorithms, although the actual time saved depends on the data. Examples from two applications are given: medical imaging and molecular graphics. 1. Introduction. The increasing availability of powerful graphics workstations in the scientific and computing communities has catalyzed the development of new methods for visualizing discrete multidimensional data. In this paper, we address the problem of visualizing sampled scalar functions of three spatial dimensions, henceforth referred to as volume data.
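The early-termination idea from the abstract can be sketched as a front-to-back compositing loop that stops once accumulated opacity passes a threshold; the pyramid-of-binary-volumes optimization is omitted and the names are illustrative:

```python
import numpy as np

def composite_ray(colors, opacities, threshold=0.95):
    """Front-to-back compositing of pre-classified samples along one ray.
    colors: (S, 3) RGB per sample; opacities: (S,) alpha per sample, both ordered front to back."""
    out_color = np.zeros(3)
    out_alpha = 0.0
    for c, a in zip(colors, opacities):
        # Standard front-to-back 'over' accumulation.
        out_color += (1.0 - out_alpha) * a * c
        out_alpha += (1.0 - out_alpha) * a
        # Early ray termination: once the ray is nearly opaque,
        # further samples cannot change the pixel noticeably.
        if out_alpha >= threshold:
            break
    return out_color, out_alpha
```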
  • Complicated Views: Mainstream Cinema's Representation of Non-Cinematic Audio/Visual Technologies After Television
    University of Southampton Research Repository Copyright © and Moral Rights for this thesis and, where applicable, any accompanying data are retained by the author and/or other copyright owners. A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This thesis and the accompanying data cannot be reproduced or quoted extensively from without first obtaining permission in writing from the copyright holder/s. The content of the thesis and accompanying research data (where applicable) must not be changed in any way or sold commercially in any format or medium without the formal permission of the copyright holder/s. When referring to this thesis and any accompanying data, full bibliographic details must be given, e.g. Thesis: Author (Year of Submission) "Full thesis title", University of Southampton, name of the University Faculty or School or Department, PhD Thesis, pagination. Data: Author (Year) Title. URI [dataset] University of Southampton Faculty of Arts and Humanities Film Studies Complicated Views: Mainstream Cinema’s Representation of Non-Cinematic Audio/Visual Technologies after Television. DOI: by Eliot W. Blades Thesis for the degree of Doctor of Philosophy May 2020 University of Southampton Abstract Faculty of Arts and Humanities Department of Film Studies Thesis for the degree of Doctor of Philosophy Complicated Views: Mainstream Cinema’s Representation of Non-Cinematic Audio/Visual Technologies after Television. by Eliot W. Blades This thesis examines a number of mainstream fiction feature films which incorporate imagery from non-cinematic moving image technologies. The period examined ranges from the era of the widespread success of television (i.e.
  • Volume Rendering on Scalable Shared-Memory MIMD Architectures
    Volume Rendering on Scalable Shared-Memory MIMD Architectures. Jason Nieh and Marc Levoy, Computer Systems Laboratory, Stanford University. Abstract. Volume rendering is a useful visualization technique for understanding the large amounts of data generated in a variety of scientific disciplines. Routine use of this technique is currently limited by its computational expense. We have designed a parallel volume rendering algorithm for MIMD architectures based on ray tracing and a novel task queue image partitioning technique. The combination of ray tracing and MIMD architectures allows us to employ algorithmic optimizations such as hierarchical opacity enumeration, early ray termination, and adaptive image sampling. The use of task queue image partitioning makes these optimizations efficient in a parallel framework. We have implemented our algorithm on the Stanford DASH Multiprocessor, a scalable shared-memory MIMD machine. Its single address-space and coherent caches provide programming ease and good performance for our algorithm.
    From the introduction: … time. Examples of these optimizations are hierarchical opacity enumeration, early ray termination, and spatially adaptive image sampling [13, 14, 16, 18]. Optimized ray tracing is the fastest known sequential algorithm for volume rendering. Nevertheless, the computational requirements of the fastest sequential algorithm still preclude real-time volume rendering on the fastest computers today. Parallel machines offer the computational power to achieve real-time volume rendering on usefully large datasets. This paper considers how parallel computing can be brought to bear in reducing the computation time of volume rendering. We have designed and implemented an optimized volume ray tracing algorithm that employs a novel task queue image partitioning technique on the Stanford DASH Multiprocessor [10], a scalable shared-memory MIMD (Multiple Instruction, Multiple Data) machine consisting
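A minimal sketch of task-queue image partitioning in the spirit of the abstract: the image is split into small tiles that worker threads pull from a shared queue, so tiles whose rays terminate early don't leave some workers idle. The tile size, worker count, and render_tile stub are illustrative, and in CPython the GIL would serialize pure-Python ray code, so this only demonstrates the partitioning scheme:

```python
import queue
import threading
import numpy as np

def render_image(width, height, render_tile, tile=32, workers=8):
    """render_tile(x0, y0, x1, y1) -> ndarray of shape (y1-y0, x1-x0, 3).
    Tiles are pulled from a shared task queue for dynamic load balancing."""
    image = np.zeros((height, width, 3), dtype=np.float32)
    tasks = queue.Queue()
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            tasks.put((x, y, min(x + tile, width), min(y + tile, height)))

    def worker():
        while True:
            try:
                x0, y0, x1, y1 = tasks.get_nowait()
            except queue.Empty:
                return
            # Cheap tiles (empty space, early-terminated rays) finish fast, and the
            # worker immediately grabs another -- that is the load balancing.
            image[y0:y1, x0:x1] = render_tile(x0, y0, x1, y1)
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return image
```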
  • Doron Kipper
    Doron Kipper — [email protected], (818) 396-7484
    Filmography (more available upon request). Feature Films: Ender's Game (2012–2013), FX/Integration Coordinator; Profile (2012), On-Set VFX Data Wrangler. Short Films: Sinners & Saints (2011, RED Epic), Key Grip; Misdirection (2010, 35mm), Director/Writer/Producer; Four Winds (2010, 35mm), Script Supervisor; Onigiri (2010, Super16), Script Supervisor; Abduction (2009, 16mm), Cinematographer; Evacua (2008, RED), Script Supervisor; Brain Found (2008, 16mm), Cinematographer; Shoes (2008, 16mm), Cinematographer/Editor; Sweet Cheeks (2007, DV), 1st Assistant Director; The Warning (2004, DV), Director/Writer/Editor. Media: Breaking Ice (Breaking Bad webisode, final season, 2012), DP (HD); Disney's D23 Armchair Archivist, Season 2 (2011), Camera/Editor; over 40 additional theatrical productions (HD/SD, 2003–11), Videography/DVD; No Good Television promotional spot (2010), Special FX Specialist; Tippi Hedren/Vivica A. Fox awards reel (La Femme, 2008), Editor; Antsy McClain and The Troubadours concert (HD, 2008), Asst. DP/Camera; Big River (MET2) promotional trailer (2008), Editor.
    • Passionate about Filmmaking. • High standard for quality. • Driven to excel in any job, no matter how small. • Follows directions quickly with attention to detail and efficiency. • Wide range of skill sets and knowledge in areas of film, theatre production, & technology. • Eager & quick to learn new skills. • Confronts obstacles with creative solutions under extreme stress. • Works well in a collaborative environment. • Quality work is more important than sleep.
    Software Proficiency: • Shotgun • Filemaker
  • EPC Exhibit 129-30.1 770 *‡Photography, Computer Art
    Dewey Decimal Classification — EPC Exhibit 129-30.1
    770 *‡Photography, computer art, cinematography, videography
    Standard subdivisions are added for photography, computer art, cinematography, videography together; for photography alone
    Class here conventional photography (photography using film), digital photography
    Class technological photography in 621.367
    See also 760 for hybrid photography
    SUMMARY
    770.1–.9 Standard subdivisions
    771 Techniques, procedures, apparatus, equipment, materials
    772 Metallic salt processes
    773 Pigment processes of printing
    774 Holography
    776 Computer art (Digital art)
    777 Cinematography and videography
    778 Specific fields and special kinds of photography
    779 Photographic images
    .1 *‡Philosophy and theory
    .11 *‡Inherent features
    Do not use for systems; class in 770.1
    Including color, composition, decorative values, form, light, movement, perspective, space, style, symmetry, vision
    .2 *‡Miscellany
    .23 *‡Photography as a profession, occupation, hobby
  • Intro to Digital Photography.Pdf
    ABSTRACT: Learn and master the basic features of your camera to gain better control of your photos. Individualized chapters on each of the camera's basic functions as well as cheat sheets you can download and print for use while shooting. Neuberger, Lawrence. DGMA 3303 Digital Photography — Intro to Digital Photography: Mastering the Basics.
    Table of Contents: Camera Controls (7); Image Sensor (8); Camera Lens (8); Camera Modes (9); Built-in Flash (11); Viewing System (11); Image File Formats (13); File Compression (…)