Software Simulation of Depth of Field Effects in Video from Small Aperture Cameras
Jordan Sorensen

Total pages: 16
File type: PDF, size: 1020 KB

Software Simulation of Depth of Field Effects in Video from Small Aperture Cameras

by Jordan Sorensen

B.S., Electrical Engineering, M.I.T., 2009

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Engineering in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, June 2010.

© Massachusetts Institute of Technology 2010. All rights reserved.

Author: Department of Electrical Engineering and Computer Science, May 21, 2010
Certified by: Ramesh Raskar, Associate Professor, MIT Media Laboratory, Thesis Supervisor
Accepted by: Dr. Christopher J. Terman, Chairman, Department Committee on Graduate Theses

Abstract

This thesis proposes a technique for post-processing digital video to introduce a simulated depth of field effect. Because the technique is implemented in software, it affords the user greater control over the parameters of the effect (such as the amount of defocus, the aperture shape, and the defocus plane) and allows the effect to be used even on hardware which would not typically allow for depth of field. In addition, because it is a purely post-processing technique and requires no change in capture method or hardware, it can be used on any video and introduces no new costs. This thesis describes the technique, evaluates its performance on example videos, and proposes further work to improve the technique's reliability.

Thesis Supervisor: Ramesh Raskar
Title: Associate Professor, MIT Media Laboratory

Acknowledgements

Thanks to Ramesh Raskar for his guidance and suggestions as my thesis advisor, and for his help in brainstorming research ideas. Thanks to the rest of the Camera Culture group for allowing me a glimpse of the research and publication process, and for everything I have learned from them over the past year and a half. I would also like to thank Sharmeen Browarek for her help formatting this document and for her assistance in navigating the world of M.Eng. theses at MIT. Thanks to Erik, Jeff, and all my friends for their constant support throughout my MIT career. Finally, thanks to my family for helping me survive graduate life. Curt and Laura, your visit helped me relax when I was most worried. Mom and Dad, your care packages kept me well fed while I wrote this thesis and your weekly calls reminded me that someone was thinking of me.

Contents

1 Introduction
  1.1 Motivation for Simulating Depth of Field
  1.2 Proposal
  1.3 Goals of Method
  1.4 Structure of Thesis
2 Background on Depth of Field and View Synthesis
  2.1 Modeling a Camera
    2.1.1 Pinhole Camera Model
    2.1.2 Finite Aperture Model
    2.1.3 Modeling a Finite Aperture Camera Using the Pinhole Camera Model
  2.2 Video Stabilization
    2.2.1 Re-Cinematography
    2.2.2 Light Field Video Stabilization
    2.2.3 Content Preserving Warps
  2.3 Depth of Field Simulation for Still Images
  2.4 Unstructured Lumigraph Rendering
  2.5 Feature Point Detection
  2.6 Structure from Motion
3 Method for Simulating Depth of Field
  3.1 Overview
  3.2 Feature Point Detection
  3.3 Structure from Motion
  3.4 Focal Plane Selection
  3.5 Camera Generation
  3.6 Feature Point Projection
  3.7 Image Warping
  3.8 Image Matching
  3.9 Image Averaging
4 Results of Simulation Method
  4.1 Hardware Used
  4.2 Test Videos
  4.3 Feature Point Detection
    4.3.1 Voodoo
    4.3.2 Voodoo vs. Bundler
  4.4 Structure from Motion
    4.4.1 Voodoo
    4.4.2 Bundler
  4.5 Feature Point Projection
  4.6 Image Warping
  4.7 Image Matching and Averaging
  4.8 Final Video
5 Further Work
  5.1 Using Feature Points from Other Frames
  5.2 Better Triangulation Method
  5.3 Content Preserving Warps
  5.4 Generalized View Synthesis Technique
  5.5 Recinematography
  5.6 Bokeh Effects
6 Conclusion

List of Figures

1-1 Can a cheap consumer camera, like the one on the left, capture video featuring advanced effects like the expensive professional camera on the right?
2-1 Pinhole camera capturing a point.
2-2 Finding a point's image location using similar triangles.
2-3 Increasing the size of the aperture to a finite area.
2-4 A lens focuses light rays at a point.
2-5 An image captured using a large aperture (bottom) together with the same image captured using a tiny aperture.
2-6 A lens can be modeled as a sum of pinhole cameras.
2-7 Profusion 25, a light field camera. Image courtesy Point Grey [7].
3-1 Feature points detected in one frame of test video.
3-2 Structure from motion provides 3D feature point locations and camera information.
3-3 The user selects four feature points on an object they would like to stay in focus.
3-4 Generated virtual camera positions (dots) surrounding the actual camera position (plus sign).
3-5 Original feature points marked with dots; projected points with crosses. Projected points are points as seen by a virtual camera slightly to the left of the actual camera.
3-6 Image is triangulated. The points will then be warped to their projected locations, marked by crosses.
3-7 Final averaged result of one frame of test video.
4-1 Feature points for three sample frames of lab space video using three detection schemes with Voodoo.
4-2 Histograms showing how many frames registered x feature points for each of the three methods.
4-3 Feature points for three sample frames of lab space video using Bundler.
4-4 Feature points for three sample frames of lab space video using Voodoo.
4-5 Histogram showing how many frames registered x feature points using Bundler.
4-6 Histogram showing how many frames registered x feature points using Voodoo.
4-7 Feature points for three sample frames of cake box sequence using Bundler.
4-8 Feature points for three sample frames of cake box sequence using Voodoo.
4-9 Histogram showing how many frames registered x feature points using Bundler.
4-10 Histogram showing how many frames registered x feature points using Voodoo.
4-11 Cake box sequence setup. The box labeled "A" is a red and yellow cake box, and is 18 cm tall. The box labeled "B" is a brown and yellow brownie box, and is 19 cm tall.
4-12 Single frame from cake box sequence. Feature points registered by Voodoo are marked with white dots. Selected feature points used for measurement are marked with white squares with black edges.
4-13 Estimated 3D coordinates of one frame's feature points using the Voodoo structure from motion technique. Scene is viewed from overhead. Selected feature points are again marked with white squares with black edges. Feature points from the scene background are cropped from view.
4-14 Single frame from cake box sequence. Feature points registered by Bundler are marked with white dots. Selected feature points used for measurement are marked with white squares with black edges.
4-15 Estimated 3D coordinates of one frame's feature points using the Bundler structure from motion technique. Scene is viewed from overhead. Selected feature points are again marked with white squares with black edges. Feature points from the scene background are cropped from view.
4-16 One frame from media lab video marked with actual feature point locations and locations projected from 3D coordinates of each feature point. The original feature point is marked with a '+'; the projected 3D coordinates are marked with
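Taken together, Sections 3.1 through 3.9 describe a synthetic-aperture pipeline: re-render each frame as it would appear from many virtual pinhole cameras spread across a simulated aperture (Figure 3-4), aligned so the selected focal plane stays fixed, and average the results (Figure 2-6). Below is a minimal sketch of that idea in Python, assuming OpenCV and a dense per-pixel depth map; the thesis itself warps a triangulated mesh of sparse structure-from-motion feature points rather than using dense depth, so this illustrates the principle, not the thesis's implementation.

```python
import numpy as np
import cv2  # assumed available for the remap step

def simulate_dof(frame, depth, f, z_focus, aperture_radius, n_views=16):
    """Synthetic-aperture average: each virtual camera on the aperture disk
    sees off-plane points shifted by parallax f*b*(1/Z - 1/z_focus), while
    points at depth z_focus map to themselves; averaging the re-rendered
    views blurs everything off the chosen focal plane."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    par = (1.0 / depth - 1.0 / z_focus).astype(np.float32)  # zero on focal plane
    acc = np.zeros(frame.shape, np.float64)
    for i in range(n_views):
        ang = 2.0 * np.pi * i / n_views
        bx = aperture_radius * np.cos(ang)  # virtual camera offset on the
        by = aperture_radius * np.sin(ang)  # aperture disk (cf. Figure 3-4)
        # inverse warp: gather each output pixel from its shifted source
        map_x = (xs + f * bx * par).astype(np.float32)
        map_y = (ys + f * by * par).astype(np.float32)
        acc += cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR).astype(np.float64)
    return (acc / n_views).astype(frame.dtype)
```

Larger `aperture_radius` values defocus off-plane content more strongly, which is exactly the user-controllable defocus parameter the abstract highlights.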
Recommended publications
  • Understanding Jitter and Wander Measurements and Standards, Second Edition
    Understanding Jitter and Wander Measurements and Standards, Second Edition

    Contents:
    Preface
    1. Introduction to Jitter and Wander
    2. Jitter Standards and Applications
    3. Jitter Testing in the Optical Transport Network (OTN)
    4. Jitter Tolerance Measurements
    5. Jitter Transfer Measurement
    6. Accurate Measurement of Jitter Generation
    7. How Tester Intrinsics and Transients Affect Your Jitter Measurement
    8. What O.172 Doesn't Tell You
    9. Measuring 100 mUIp-p Jitter Generation with an O.172 Tester?
    10. Faster Jitter Testing with Simultaneous Filters
    11. Verifying Jitter Generator Performance
    12. An Overview of Wander Measurements
    Acronyms

    Welcome to the second edition of the Agilent Technologies Understanding Jitter and Wander Measurements and Standards booklet, the distillation of over 20 years' experience and know-how in the field of telecommunications jitter testing. The Telecommunications Networks Test Division of Agilent Technologies (formerly Hewlett-Packard) in Scotland introduced the first jitter measurement instrument in 1982 for PDH rates up to E3 and DS3, followed by one of the first 140 Mb/s jitter testers in 1984. SONET/SDH jitter test capability followed in the 1990s, and recently Agilent introduced one of the first 10 Gb/s Optical Channel jitter test sets for measurements on the new ITU-T G.709 frame structure. Over the years, Agilent has been a significant participant in the development of jitter industry standards, with many contributions to ITU-T O.172/O.173, the standards for jitter test equipment, and Telcordia GR-253/ITU-T G.783, the standards for operational SONET/SDH network equipment.
  • Motion Denoising with Application to Time-Lapse Photography
    IEEE Computer Vision and Pattern Recognition (CVPR), June 2011

    Motion Denoising with Application to Time-lapse Photography
    Michael Rubinstein (MIT CSAIL), Ce Liu (Microsoft Research New England), Peter Sand, Frédo Durand (MIT CSAIL), William T. Freeman (MIT CSAIL)
    {mrub,sand,fredo,billf}@mit.edu, [email protected]

    Abstract: Motions can occur over both short and long time scales. We introduce motion denoising, which treats short-term changes as noise, long-term changes as signal, and re-renders a video to reveal the underlying long-term events. We demonstrate motion denoising for time-lapse videos. One of the characteristics of traditional time-lapse imagery is stylized jerkiness, where short-term changes in the scene appear as small and annoying jitters in the video, often obfuscating the underlying temporal events of interest. We apply motion denoising for resynthesizing time-lapse videos showing the long-term evolution of a scene with jerky short-term changes removed. We show that existing filtering approaches are often incapable of achieving this task, and present a novel computational approach to denoise motion without explicit motion analysis. We demonstrate promising experimental results on a set of challenging time-lapse sequences.

    Figure 1. A time-lapse video of plants growing (sprouts). XT slices of the video volumes are shown for the input sequence and for the result of our motion denoising algorithm (top right). The motion-denoised sequence is generated by spatiotemporal rearrangement of the pixels in the input sequence (bottom center; spatial and temporal displacement on top and bottom respectively, following the color coding in Figure 5).

    1. Introduction: Short-term, random motions can distract from the
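    The abstract's claim that simple filtering fails is easy to test against the most obvious baseline. A per-pixel temporal median, sketched below in Python under the assumption that the clip fits in memory as a (T, H, W, C) array, suppresses transient jitter but also smears structured long-term motion, which is the failure mode motion denoising is designed to avoid; this is a baseline for contrast, not the paper's method.

```python
import numpy as np

def temporal_median(frames, radius=5):
    """Naive per-pixel temporal median over a sliding window.
    frames: (T, H, W, C) uint8 array. Short-lived changes (shorter than
    the window) are suppressed, but moving structures are also smeared,
    unlike the paper's spatiotemporal-rearrangement approach."""
    T = frames.shape[0]
    out = np.empty_like(frames)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        out[t] = np.median(frames[lo:hi], axis=0).astype(frames.dtype)
    return out
```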
  • Glidecam HD-Series MANUAL: Set-Up and Operations Guide, Glidecam Industries, Inc.
    GLIDECAM HD-SERIES 1000/2000/4000 MANUAL: Set-up and Operations Guide
    Glidecam Industries, Inc., 23 Joseph Street, Kingston, MA 02364. Customer Service Line 1-781-585-7900. Manufactured in the U.S.A. Copyright 2013 Glidecam Industries, Inc. All rights reserved.

    PLEASE NOTE: Since the Glidecam HD-2000 is essentially the same as the HD-1000 and the HD-4000, this manual only shows photographs of the Glidecam HD-2000 being set up and used. The Glidecam HD-1000 and the HD-4000 are just smaller and larger versions of the HD-2000. When there are important differences between the HD-2000 and the HD-1000 or HD-4000, you will see them noted with a ***. Also, the words HD-2000 will be used for the most part to include the HD-1000 and HD-4000 as well.

    TABLE OF CONTENTS
    1. Introduction
    2. Glidecam HD-2000 Parts and Components
    3. Assembling your Glidecam HD-2000
    4. Attaching your camera to your Glidecam HD-Series
    5. Balancing your Glidecam HD-2000
    6. Handling your Glidecam HD-2000
    7. Operating your Glidecam HD-2000
    8. Improper Techniques
    9. Shooting Tips
    10. Other Camera Attachment Methods
    11. Professional Usage
    12. Maintenance
    13. Warning
    14. Warranty
    15. Online Information

    #1 INTRODUCTION
    Congratulations on your purchase of a Glidecam HD-1000, HD-2000, or HD-4000. The amazingly advanced and totally re-engineered HD-Series from Glidecam Industries represents the top of the line in hand-held camera stabilization.
  • DISTRIBUTED RAY TRACING – Some Implementation Notes
    DISTRIBUTED RAY TRACING – some implementation notes

    Multiple distributed sampling; jitter to break up patterns.

    DRT, theory vs. practice: brute force generates multiple rays at every sampling opportunity; the alternative is, for each subsample, to randomize at each opportunity.

    DRT COMPONENTS
    - Anti-aliasing and motion blur: supersample in time and space (anti-aliasing); jitter samples in time and space (motion blur).
    - Depth of field: sample the lens (blur).
    - Shadows: sample the light source (soft shadows).
    - Reflection: sample the reflection direction (rough surface).
    - Transparency: sample the transmission direction (translucent surface).

    REPLACE CAMERA MODEL: shift from the pinhole camera model to a lens camera model. The picture plane is at -w, not +w; the camera position becomes the lens center; the picture plane is behind the 'pinhole'; negate u, v, w and trace rays from the pixel to the camera.

    ANTI-ALIASING, ORGANIZING SUBPIXEL SAMPLES. Options:
    1. do each subsample in raster order
    2. do each pixel in raster order, do each subsample in raster order
    3. do each pixel in raster order, do all subsamples in temporal order
    4. keep a framebuffer, do all subsamples in temporal order

    SPATIAL JITTERING: for each pixel (200x200) i, j; for each subpixel sample (4x4) s, t; jitter s, t.

    MOTION BLUR, TEMPORAL JITTERING: for each subsample, get its delta time from a table, jitter the delta by +/- 1/2 of a time division, and move objects to that instant in time.

    DEPTH OF FIELD: generate a ray from the subsample through the lens center to the focal plane; generate a random sample on the lens disk (random in 2D u, v); generate a ray from this point to the focal-plane point (sketched in code below).

    VISIBILITY: as usual, intersect the ray with the environment and find the first intersection, at point p on object o with normal n.

    SHADOWS: generate a random vector on the surface of the light (random on sphere).

    REFLECTIONS: compute the reflection vector R; generate a random sample in a sphere at the end of R.

    TRANSPARENCY: compute the transmission vector T; generate a random sample in a sphere at the end of T.

    SIDE NOTE: randomize n instead of randomizing R and T.
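    The depth-of-field recipe in these notes translates almost line for line into code. A Python sketch, assuming a simplified version of the notes' lens camera model in which the camera sits at the origin and the pinhole ray through pixel position (px, py) points along (px, py, -1):

```python
import random

def lens_ray(pixel_x, pixel_y, lens_radius, focal_dist, subdiv=4):
    """One distributed depth-of-field sample: jitter the subpixel position,
    find where the lens-center ray hits the focal plane, then re-aim the
    ray from a random point on the lens disk toward that focal-plane point.
    pixel_x, pixel_y are in normalized image-plane units."""
    # jittered subpixel sample: pick a cell of the subdiv x subdiv grid,
    # then jitter within it to break up patterns
    s = (random.randrange(subdiv) + random.random()) / subdiv
    t = (random.randrange(subdiv) + random.random()) / subdiv
    px, py = pixel_x + s, pixel_y + t
    # the pinhole ray (0,0,0) + u*(px, py, -1) hits the focal plane
    # z = -focal_dist at u = focal_dist
    hit = (px * focal_dist, py * focal_dist, -focal_dist)
    # uniform random point on the lens disk, by rejection sampling
    while True:
        u, v = 2 * random.random() - 1, 2 * random.random() - 1
        if u * u + v * v <= 1.0:
            break
    origin = (u * lens_radius, v * lens_radius, 0.0)
    direction = tuple(h - o for h, o in zip(hit, origin))
    return origin, direction
```

    Shadows, reflections, and transparency follow the same pattern in the notes: keep the deterministic ray, then perturb it with one random sample per subsample.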
  • Research on Feature Point Registration Method for Wireless Multi-Exposure Images in Mobile Photography, Hui Xu
    Xu, EURASIP Journal on Wireless Communications and Networking (2020) 2020:98. https://doi.org/10.1186/s13638-020-01695-4. RESEARCH, Open Access.

    Research on feature point registration method for wireless multi-exposure images in mobile photography
    Hui Xu (School of Computer Software, Tianjin University, Tianjin 300072, China; College of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China). Correspondence: [email protected]

    Abstract: In a mobile shooting environment, multiple exposures occur easily because of camera jitter and sudden changes in ambient illumination, so feature points of multi-exposure images captured under mobile photography must be registered to improve image quality. A feature point registration technique based on white balance offset compensation is proposed. Global motion estimation of the image is carried out, spatial neighborhood information is integrated into the amplitude detection of the multi-exposure image under mobile photography, and the amplitude characteristics of the multi-exposure image are extracted. The texture information of the multi-exposure image is compared to that of a globally moving RGB 3D bit-plane random field, and the white balance deviation of the multi-exposure image is compensated. At different scales, a suitable white balance offset compensation function is used to describe the feature points of the multi-exposure image, parallax analysis and corner detection of the target pixels are carried out, and image stabilization is achieved in combination with the feature registration method. Simulation results show that the proposed method registers feature points of multi-exposure images under mobile photography with high accuracy and good performance, and that image quality is improved.
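    The paper's white-balance offset compensation is its own contribution and is not reproduced here. As a rough stand-in, the sketch below (Python with OpenCV; the detector choice, match strategy, and thresholds are all assumptions) shows the conventional registration step that compensation is meant to harden: match features between two differently exposed frames and fit a homography with RANSAC, with histogram equalization crudely standing in for exposure compensation.

```python
import cv2
import numpy as np

def register_exposures(img_a, img_b):
    """Conventional stand-in for the paper's registration step, assuming
    BGR uint8 inputs: equalize (where the paper's white-balance offset
    compensation would go), match ORB features, fit a RANSAC homography."""
    ga = cv2.equalizeHist(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY))
    gb = cv2.equalizeHist(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY))
    orb = cv2.ORB_create(2000)
    ka, da = orb.detectAndCompute(ga, None)
    kb, db = orb.detectAndCompute(gb, None)
    # cross-checked brute-force Hamming matching on binary descriptors
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    src = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # needs >= 4 matches
    return H
```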
  • Cinematography
    CINEMATOGRAPHY

    ESSENTIAL CONCEPTS
    • The filmmaker controls the cinematographic qualities of the shot: not only what is filmed but also how it is filmed.
    • Cinematographic qualities involve three factors: 1. the photographic aspects of the shot; 2. the framing of the shot; 3. the duration of the shot.

    In other words, cinematography is affected by choices in: 1. photographic aspects of the shot; 2. framing; 3. duration of the shot.

    1. Photographic image
    The study of the photographic image includes: A. range of tonalities; B. speed of motion; C. perspective.

    1.A: Tonalities of the photographic image
    The range of tonalities includes:
    I. Contrast (black & white; color). It can be controlled with lighting, filters, film stock, laboratory processing, and postproduction.
    II. Exposure (how much light passes through the camera lens). An image that is too dark is underexposed; one that is too bright is overexposed. Exposure can be controlled with filters.

    1.A: Tonality, continued
    Tonality can be changed after filming:
    Tinting: dipping developed film in dye. Dark areas remain black and gray; light areas pick up color.
    Toning: dipping during developing of the positive print. Dark areas are colored; light areas remain white or faintly colored.

    1.A: Tonality, continued
    • Photochemically based filmmaking can have the tonality fixed, done by a color timer or grader in the laboratory.
    • Digital grading is used today. A scanner converts film to digital files, creating a digital intermediate (DI). The DI is adjusted with software and scanned back onto the negative.

    1.B: Speed of motion
    • Depends on the relation between the rate at which
  • Dr. Katie Bird Curriculum Vitae, Sept 2019
    Dr. Katie Bird, Curriculum Vitae, Sept 2019
    Department of Communication, University of Texas – El Paso, 301 Cotton Memorial, El Paso, TX 79968. kebird[at]utep.edu

    EDUCATION
    Ph.D., Film and Media Studies, Department of English, University of Pittsburgh, August 2018. Dissertation: "'Quiet on Set!': Craft Discourse and Below-the-Line Labor in Hollywood, 1919-1985." Committee: Mark Lynn Anderson (chair), Adam Lowenstein, Neepa Majumdar, Randall Halle, Daniel Morgan (University of Chicago), Dana Polan (New York University). Fields: Filmmaking, Media Industries, Technology, American Film Industry History, Studio System, Below-the-Line Production Culture, Cultural Studies, Exhibition/Institutional History, Labor History, Film Theory.
    M.A., Literary and Cultural Studies, Department of English, Carnegie Mellon University, 2010. Thesis-length project: "Postwar Movie Advertising in Exhibitor Niche Markets: Pittsburgh's Art House Theaters, 1948-1968."
    B.A., Film Production, School of Film and Television, Loyola Marymount University, 2007.
    B.A., Creative Writing, English Department, Loyola Marymount University, 2007.

    PROFESSIONAL APPOINTMENTS
    2019: TT Assistant Professor, Film Studies and Digital Media Production, Department of Communication, University of Texas, El Paso (UTEP)
    2018: Visiting Lecturer, Film and Media Studies/Filmmaking, Department of English, University of Pittsburgh
    2017: Digital Media Learning Coordinator, Visiting Instructor, Department of English, University of Pittsburgh

    PUBLICATIONS
    2021 (forthcoming): "Sporting Sensations: Béla Balázs and the Bergfilm Camera Operator." Journal of Cinema and Media Studies/Cinema Journal, Spring 2021.
    2020 (forthcoming): "Steadicam Style, 1972-1985." [in]Transition, Spring 2020.
    2018: "The Editor's Face on the Cutting Room Floor: Fredrick Y. Smith's Precarious Promotion of the American Cinema Editors, 1942-1977." The Spectator (special issue: "System Beyond the Studios," guest edited by Luci Marzola) 38, no.
  • Bill McClelland, [email protected], (858) 883-4078, IATSE, SOC, SOA
    Bill McClelland, [email protected], www.sunsetproductionstudios.com, (858) 883-4078. IATSE, SOC, SOA.

    Film | Position | Company/Director
    Shooting Christmas | B Camera/Steadicam | Hallmark/Steven Monroe
    Finding Steve Mcqueen | C Camera | FSMQ LLC/Mark Steven Johnson
    Dog Years | A Camera/Steadicam | Dog Years LLC/Adam Rifkin
    Media | Steadicam | Media 100/Craig Ross
    The Bombing | A Cam/Steadi/Underwater | Atom Studios/Xian Zou
    Fist Fight | C Camera | S&K Pictures/Ritchie Keen
    Directors Cut | A Camera/Steadicam Op | Make Penn Bad/Adam Rifkin
    Obsessed | B Camera/Steadicam Op | Lifetime/Anthony DiBlasi
    Mercenaries | B Camera/Steadicam Op | Tiki Terrors/Chris Ray
    Dead Water | B Camera/Steadicam Op | Tiki Terrors/James Bressack
    Displacement | DP/Steadicam/Camera | Maderfilm Productions/Ken Mader
    T-Minus | Steadicam Operator | Tiki Terrors/Chris Ray
    Android Cop | Steadicam Operator | Tiki Terrors/Mark Atkins
    Holiday Road Trip | Steadicam Operator | Monogram Pictures/Fred Olen Ray
    Maul Dogs | Steadicam Operator | AZ Film/Ali Zamani
    Dead Sea | DP/Steadicam/Camera | Dead Sea Films/Brandon Slagle
    House of Secrets | Steadicam Operator | Monogram Pictures/Fred Olen Ray
    Nightcrawler | Steadicam Operator | Russell Appling
    Holiday Heist | Camera/Steadicam | Taut Productions/Joe Lawson
    Atlantic Rim | B Camera/Steadicam | Tiki Terrors/Jerod Cohen
    Meet The Cleavers | Steadicam Operator | Taut Productions/HM Coakley
    Mission 26: Endeavor | Steadicam/Camera Operator | California Science Center/Haley Jackson
    I Love You | Steadicam Operator | James Love/Lucas Colombo
    Bigfoot | Steadicam Operator | Asylum Pictures/Bruce Davidson
    Spreading Darkness | Steadicam
  • The Phenomenological Aesthetics of the French Action Film
    Les Sensations fortes: The Phenomenological Aesthetics of the French Action Film

    DISSERTATION presented in partial fulfillment of the requirements for the degree Doctor of Philosophy in the Graduate School of The Ohio State University, by Matthew Alexander Roesch, Graduate Program in French and Italian, The Ohio State University, 2017. Dissertation Committee: Margaret Flinn (Advisor), Patrick Bray, Dana Renga. Copyright by Matthew Alexander Roesch, 2017.

    Abstract: This dissertation treats les sensations fortes, or "thrills," that can be accessed through the experience of viewing a French action film. Throughout the last few decades, French cinema has produced an increasing number of "genre" films, a trend marked by the appearance of more generic variety and by the increased labeling of these films, as generic variety, in France. Regardless of the critical or even public support for these projects, these films engage in a spectatorial experience that is unique to the action genre. But how do these films accomplish their experiential phenomenology? Starting with the appearance of Luc Besson in the 1980s, and following with the increased hybrid mixing of the genre with other popular genres, as well as the recurrence of sequels in the 2000s and 2010s, action films portray a growing emphasis on the importance of the film experience and its relation to everyday life. Rather than being direct copies of Hollywood or Hong Kong action cinema, French films are uniquely sensational based on their spectacular visuals, their narrative tendencies, and their presentation of the corporeal form. Relying on a phenomenological examination of the action film filtered through the philosophical texts of Maurice Merleau-Ponty, Paul Ricoeur, Mikel Dufrenne, and Jean-Luc Marion, in this dissertation I show that French action cinema is pre-eminently concerned with the thrill that comes from the experience, and less concerned with a political or ideological commentary on the state of French culture or cinema.
  • A Low-Cost Flash Photographic System for Visualization of Droplets in Drop-on-Demand Inkjet
    Journal of Imaging Science and Technology 62(6): 060502-1–060502-9, 2018. © Society for Imaging Science and Technology 2018.

    A Low-Cost Flash Photographic System for Visualization of Droplets in Drop-on-Demand Inkjet
    Huicong Jiang, Brett Andrew Merritt, and Hua Tan. School of Engineering and Computer Science, Washington State University Vancouver, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA. E-mail: [email protected]

    Abstract. Flash photography has been widely used to study droplet dynamics in drop-on-demand (DoD) inkjet due to its distinct advantages in cost and image quality. However, the whole setup, typically comprising the mounting platform, flash light source, inkjet system, CCD camera, magnification lens and pulse generator, still costs tens of thousands of dollars. To reduce the cost of visualization for DoD inkjet droplets, we proposed to replace the expensive professional pulse generator with a low-cost microcontroller board in the flash photographic system. The temporal accuracy of the microcontroller was measured by an oscilloscope.

    ...the extreme, it tries to reduce the flash time of the light source, which is much easier than improving the shutter speed of a camera and can be compatible with most cameras. Although only one image can be taken from an event through this method, by leveraging the high reproducibility of the DoD inkjet and delaying the flash, a sequence of images freezing different moments can be acquired from a series of identical events, which then yields a video recording the whole process. It is noteworthy that flash photography should be
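    The delayed-flash idea in the excerpt (one frozen image per identical ejection event, with the flash delay swept across events) reduces to a short timing loop. A Python sketch with hypothetical hardware callbacks `fire_jet`, `fire_flash`, and `grab_frame`; note that `time.sleep` cannot hit microsecond accuracy, which is exactly why the paper moves this loop onto a microcontroller and checks its timing with an oscilloscope.

```python
import time

def capture_droplet_sequence(fire_jet, fire_flash, grab_frame,
                             step_us=10, n_frames=100):
    """Timing loop behind the delayed-flash method: the DoD event is highly
    repeatable, so sweeping the flash delay across many identical events
    yields a frame sequence that plays back as a video of one droplet.
    All three callbacks are hypothetical stand-ins for real hardware;
    sleep-based delays are illustrative only."""
    frames = []
    for i in range(n_frames):
        fire_jet()                      # eject one droplet
        time.sleep(i * step_us * 1e-6)  # delay grows by step_us each event
        fire_flash()                    # freeze this instant of flight
        frames.append(grab_frame())     # effective exposure = flash duration
        time.sleep(0.05)                # let the system settle between events
    return frames
```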
  • Jittered Exposures for Image Super-Resolution
    Jittered Exposures for Image Super-Resolution
    Nianyi Li, University of Delaware, Newark, DE, USA. [email protected]
    Scott McCloskey, Honeywell ACST, Golden Valley, MN, USA. [email protected]
    Jingyi Yu, ShanghaiTech University, Shanghai, China. [email protected]

    Abstract

    Camera design involves tradeoffs between spatial and temporal resolution. For instance, traditional cameras provide either high spatial resolution (e.g., DSLRs) or high frame rate, but not both. Our approach exploits the optical stabilization hardware already present in commercial cameras and increasingly available in smartphones. Whereas single image super-resolution (SR) methods can produce convincing-looking images and have recently been shown to improve the performance of certain vision tasks, they are still limited in their ability to fully recover information lost due to under-sampling. In this paper, we present a new imaging technique that efficiently trades temporal resolution for spatial resolution in excess of the sensor's pixel count without attenuating light or adding additional hardware.

    The process can be formulated using an observation model that takes low-resolution (LR) images L as the blurred result of a high-resolution image H:

    L = WH + n    (1)

    where W = DBM, in which M is the warp matrix (translation, rotation, etc.), B is the blur matrix, D is the decimation matrix, and n is the noise. Most state-of-the-art image SR methods consist of three stages, i.e., registration, interpolation, and restoration (an inverse procedure). These steps can be implemented separately or simultaneously according to the reconstruction methods adopted. Registration refers to the procedure of estimating the motion information or the warp function M between H and L.
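    The observation model in Eq. (1) is easy to exercise in the forward direction. A minimal sketch in Python (NumPy/SciPy assumed, grayscale H, and a subpixel translation standing in for the general warp M):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def observe(H, dx, dy, blur_sigma=1.0, factor=4, noise_std=2.0):
    """Forward model L = W H + n with W = D B M:
    M = subpixel warp, B = blur, D = decimation, n = Gaussian noise."""
    warped = shift(H.astype(np.float64), (dy, dx), order=3, mode='nearest')  # M
    blurred = gaussian_filter(warped, blur_sigma)                            # B
    low = blurred[::factor, ::factor]                                        # D
    return low + np.random.normal(0.0, noise_std, low.shape)                 # n
```

    Super-resolution inverts this pipeline from several observations L whose warps M differ by known subpixel offsets, which is what jittering the stabilization hardware supplies.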
  • cmotion Cinefade VariND User Manual
    CINEFADE®: A novel form of cinematic expression. USER MANUAL. Cinefade User Manual, Oliver Janesh Christiansen, inventor of Cinefade.

    Welcome to a novel form of cinematic expression. Cinefade allows filmmakers to vary depth of field in a single shot at constant exposure, enabling a gradual transition between a sharp and a blurry background. In partnership with cmotion of Austria, London-based filmmaker Oliver Janesh Christiansen has developed a unique system. The in-camera effect immerses the viewer in a story or makes a client's product stand out in a commercial, enabling a whole new form of cinematic expression and giving cinematographers an opportunity to explore the vast creative potential of a variable depth of field.

    "Cinefade is a really useful and very subtle tool to use in moments of extreme drama. A way of immersing an audience inside the mind of a character during a pivotal moment." – Christopher Ross BSC

    Cinefade uses a cmotion cPRO lens control system to vary iris diameter, changing depth of field. The Cinefade VariND filter sits inside a matte box and automatically keeps exposure constant by slaving the filter motor to the iris motor. The Cinefade VariND is also a practical tool and can be used as a simple variable ND filter or RotaPola that is easy to mount. Filmmakers can remotely change ND values whenever the camera is inaccessible and dynamically control the VariND, for example to adjust exposure during interior-to-exterior Steadicam shots. Since Cinefade is a new addition to the cinematic language, there is no preconception of what to do, and we look forward to seeing how you will use this novel form of cinematic expression.
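    The manual does not spell out the exposure arithmetic behind slaving the VariND to the iris, but it follows from basic photometry (an assumption here, not taken from the manual): image illuminance scales as 1/N² for f-number N, so the filter must absorb exactly the stops the widening iris adds.

```python
from math import log2

def nd_stops_for_iris(f_start, f_end):
    """Stops of ND needed to hold exposure constant while the iris opens
    from f/f_start to f/f_end: exposure scales as 1/N^2, so the
    compensation is 2*log2(f_start/f_end) stops."""
    return 2 * log2(f_start / f_end)

# Opening from f/8 (deep focus) to f/2.8 (shallow focus) adds about
# 2 * log2(8 / 2.8) stops of light for the filter to absorb.
print(nd_stops_for_iris(8, 2.8))  # ~3.03
```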