Real-Time 3D Head Position Tracker System with Stereo Cameras Using a Face Recognition Neural Network

Total pages: 16

File type: PDF, size: 1020 KB

REAL-TIME 3D HEAD POSITION TRACKER SYSTEM WITH STEREO CAMERAS USING A FACE RECOGNITION NEURAL NETWORK

By Javier Ignacio Girado. B. Electronics Engineering, ITBA University, Buenos Aires, Argentina, 1982; M. Electronics Engineering, ITBA University, Buenos Aires, Argentina, 1984. Thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate College of the University of Illinois at Chicago, 2004. Chicago, Illinois.

ACKNOWLEDGMENTS

I arrived at UIC in the winter of 1996, more than seven years ago. I had a lot to learn: how to find my way around a new school, a new city, a new country, a new culture, and how to do computer science research. Those first years were very difficult for me, and honestly I would not have made it were it not for my old friends, the new ones I made at the laboratory, and my colleagues. There are too many to list them all, so let me mention just a few.

I would like to thank my thesis committee (Thomas DeFanti, Daniel Sandin, Andrew Johnson, Jason Leigh, and Joel Mambretti) for their unwavering support and assistance. They provided guidance in several areas that helped me accomplish my research goals and were very encouraging throughout the process. I would especially like to thank my thesis advisor, Daniel Sandin, for laying the foundation of this work and for his continuous encouragement and feedback. He has been a constant source of great ideas and useful guidelines throughout my thesis program. Thanks to Professor J. Ben-Arie for teaching me about science and computer vision. Thanks to Andy Johnson and Jason Leigh for keeping me on track and for expanding my vision of sci-fi and anime. I would also like to recognize Maxine Brown and Laura Wolf, who provided very helpful technical tips for organizing and writing several research documents, papers, and this dissertation.

Most of all, I want to express my sincere gratitude to my friends Laura Wolf, Brenda Lopez-Silva, and Cristian Luciano for their help, both personally and professionally.

JIG

TABLE OF CONTENTS

1. Introduction
   1.1. Background
   1.2. Motivation
   1.3. The Goal
   1.4. Artificial Neural Networks
   1.5. The Challenge of Face Detection and Recognition
   1.6. An Image-Based Approach Using Neural Networks
   1.7. Evaluation
2. Background
   2.1. Introduction
   2.2. Background in VR
        2.2.1. Brief Introduction to Virtual Reality and Its Devices
        2.2.2. The Need for and Importance of Tracker Systems in VR
   2.3. Background in Commercial VR Tracker Devices
        2.3.1. Tracker Systems Used in VR Environments
        2.3.2. Disadvantages of Tracker Systems Used in VR Environments
        2.3.3. Real Specifications of Tracker Systems Used in VR Environments
        2.3.4. Advantages of Tracker Systems Used in VR Environments
3. Proposing a New Tracker System
   3.1. Introduction
        3.1.1. A Word on Tracker Latency (or Lag)
        3.1.2. What Is a Real-Time System?
   3.2. Defining the New Tracker System's Specifications
   3.3. Specifying the Hardware and the Environment
   3.4. Theoretical Tracker Specifications Using the Varrier™ Autostereoscopic Display as a VR Environment
4. Data Preparation
   4.1. Introduction
   4.2. Preprocessing for Brightness and Contrast
        4.2.1. Standard Approaches
        4.2.2. Thesis Approach: Global Equalization in a Controlled Environment
        4.2.3. Thesis Approach: Global Preprocessing Using Shading Correction
   4.3. Camera Calibration
5. Head Tracker
   5.1. Introduction
   5.2. Overview of the Tracking Algorithm
        Training
        5.2.1. Methodology Description
        5.2.2. Algorithm Description
   5.3. Evaluation
        5.3.1. Methodology
        5.3.2. Tracker System Error in Recognizer-Only Mode
        5.3.3. Tracker System Error in Detector-Only Mode
        5.3.4. Tracker Performance Rate
        5.3.5. Frame Rate
        5.3.6. Tracking Latency
        5.3.7. Static Jitter and Drift
        5.3.8. Dynamic Jitter
        5.3.9. Static Precision
        5.3.10. Resolution
6. Real-Time Camera-Based Face Detection Using a Modified LAMSTAR Neural Network System
   6.1. Introduction
   6.2. Original Face Detector Description
   6.3. Background
   6.4. System Overview
   6.5. The Kohonen Self-Organizing Map
Recommended publications
  • Scalable Multi-View Stereo Camera Array for Real World Real-Time Image Capture and Three-Dimensional Displays
    Scalable Multi-view Stereo Camera Array for Real World Real-Time Image Capture and Three-Dimensional Displays

    Samuel L. Hill. B.S. Imaging and Photographic Technology, Rochester Institute of Technology, 2000; M.S. Optical Sciences, University of Arizona, 2002. Submitted to the Program in Media Arts and Sciences, School of Architecture and Planning, in partial fulfillment of the requirements for the degree of Master of Science in Media Arts and Sciences at the Massachusetts Institute of Technology, June 2004. © 2004 Massachusetts Institute of Technology. All rights reserved. Thesis supervisor: Dr. V. Michael Bove Jr., Principal Research Scientist, Program in Media Arts and Sciences. Accepted by Andrew Lippman, Chairperson, Department Committee on Graduate Students.

    Abstract: The number of three-dimensional displays available is escalating, and yet the capture devices for multiple-view content are focused on either single-camera precision rigs that are limited to stationary objects or the use of synthetically created animations. In this work we will use the existence of inexpensive digital CMOS cameras to explore a multi-image capture paradigm and the gathering of real-world real-time data of active and static scenes.
  • The Role of Camera Convergence in Stereoscopic Video See-Through Augmented Reality Displays
    The Role of Camera Convergence in Stereoscopic Video See-through Augmented Reality Displays
    Fabrizio Cutolo and Vincenzo Ferrari, University of Pisa, Dept. of Information Engineering & EndoCAS Center, Via Caruso 16, 56122, Pisa. (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 9, No. 8, 2018.

    Abstract: In the realm of wearable augmented reality (AR) systems, stereoscopic video see-through displays raise issues related to the user's perception of the three-dimensional space. This paper seeks to put forward a few considerations regarding the perceptual artefacts common to standard stereoscopic video see-through displays with fixed camera convergence. Among the possible perceptual artefacts, the most significant one relates to diplopia arising from reduced stereo overlaps and too-large screen disparities. Two state-of-the-art solutions are reviewed. The first one suggests a dynamic change, via software, of the virtual camera convergence [...]

    [...] merged with camera images captured by a stereo camera rig rigidly fixed on the 3D display. The pixel-wise video mixing technology that underpins the video see-through paradigm can offer high geometric coherence between virtual and real content. Nevertheless, the industrial pioneers, as well as the early adopters of AR technology, properly considered the camera-mediated view typical of video see-through devices as drastically affecting the quality of the visual perception and experience of the real [...]
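The diplopia problem described above reduces to simple vergence geometry: a point far from the camera convergence depth subtends an angular disparity that can exceed the eyes' fusional range. A minimal Python sketch under a small-angle model; the function names and the 1° fusional limit are illustrative assumptions, not values taken from the paper:

```python
import math

def angular_disparity_deg(z, z_conv, baseline):
    """Small-angle vergence difference (degrees) between a point at depth z
    and the convergence depth z_conv, for interaxial distance 'baseline':
    eta ~= baseline * (1/z_conv - 1/z) radians."""
    return math.degrees(baseline * (1.0 / z_conv - 1.0 / z))

def risks_diplopia(z, z_conv, baseline, fusion_limit_deg=1.0):
    """Flag points whose angular disparity exceeds a nominal fusional limit.
    The 1-degree default is an illustrative comfort threshold."""
    return abs(angular_disparity_deg(z, z_conv, baseline)) > fusion_limit_deg

# With convergence fixed at 0.5 m (a close-up AR task) and a 6.5 cm baseline,
# a background point at 2 m far exceeds the limit, while a point near the
# convergence depth fuses comfortably.
print(risks_diplopia(2.0, 0.5, 0.065))   # -> True
print(risks_diplopia(0.55, 0.5, 0.065))  # -> False
```

This is why fixed-convergence rigs force a trade-off: the convergence depth suits one working distance only, which is exactly the motivation for the dynamic-convergence solutions the paper reviews.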
  • 3D-Con2017program.Pdf
    There are 500 Stories at 3D-Con: This is One of Them

    I would like to welcome you to 3D-Con, a combined convention for the ISU and NSA. This is the second convention I have chaired and the fourth Southern California one I have attended. Incidentally, the first convention I chaired was the first that used the moniker 3D-Con, as suggested by Eric Kurland. This event has been harder to plan due to the absence of two friends who were movers and shakers at the last convention, David Washburn and Ray Zone. Both passed before their time soon after the last convention. I thought about both often when planning this convention.

    The old police procedural movie The Naked City starts with the quote "There are eight million stories in the naked city; this has been one of them." The same can be said of our interest in 3D. Everyone usually has an interesting and personal reason that they migrated into this unusual hobby. In a talk I did at the last convention I mentioned how I got interested in 3D. I was visiting the Getty Museum in Southern California, where they had a sequential viewer with 3D Civil War stereoviews, which I found fascinating. My wife then bought me some cards and a Holmes viewer for my birthday. When my family learned that I had a stereo viewer, they sent me the only surviving photographs from my father's childhood, which happened to be stereoviews taken in 1932 in Norwalk, Ohio by the Keystone View Company.

    Figure 1: My dad and his sister on a Keystone view, 1932.
  • 3D Frequently Asked Questions
    3D Frequently Asked Questions
    Compiled from the 3-D mailing list

    This document was compiled from postings on the 3D electronic mail group by Joel Alpers. For additions and corrections, please contact me at: [email protected]. This is Revision 1.1, January 5, 1995.

    The information in this document is provided free of charge. You may freely distribute this document to anyone you desire, provided that it is passed on unaltered with this notice intact, and that it be provided free of charge. You may charge a reasonable fee for duplication and postage. This information is deemed accurate but is not guaranteed.

    Table of Contents:
    1 Introduction
      1.1 The 3D mailing list
      1.2 3D Basics
    2 Useful References
    3 3D Time Line
    4 Suppliers
    5 Processing / Mounting
    6 3D film formats
    7 Viewing Stereo Pairs
      7.1 Free Viewing - viewing stereo pairs without special equipment
        7.1.1 Parallel viewing
        7.1.2 Cross-eyed viewing
        7.1.3 Sample 3D images
      7.2 Viewing using 3D viewers
        7.2.1 Print viewers - no longer manufactured, available used
        7.2.2 Print viewers - currently manufactured
        7.2.3 Slide viewers - no longer manufactured, available used
        7.2.4 Slide viewers - currently manufactured
    8 Stereo Cameras
      8.1 Currently Manufactured
      8.2 Available used
      8.3 Custom Cameras
      8.4 Other Techniques
        8.4.1 Twin Camera
        8.4.2 Slide Bar
  • Changing Perspective in Stereoscopic Images
    Changing Perspective in Stereoscopic Images
    Song-Pei Du, Shi-Min Hu, Member, IEEE, and Ralph R. Martin

    Abstract: Traditional image editing techniques cannot be directly used to edit stereoscopic ('3D') media, as extra constraints are needed to ensure consistent changes are made to both left and right images. Here, we consider manipulating perspective in stereoscopic pairs. A straightforward approach based on depth recovery is unsatisfactory; instead, we use feature correspondences between stereoscopic image pairs. Given a new, user-specified perspective, we determine correspondence constraints under this perspective, and optimize a 2D warp for each image which preserves straight lines and guarantees proper stereopsis relative to the new camera. Experiments verify that our method generates new stereoscopic views which correspond well to expected projections, for a wide range of specified perspectives. Various advanced camera effects, such as dolly zoom and wide-angle effects, can also be readily generated for stereoscopic image pairs using our method.

    Index Terms: Stereoscopic images, stereopsis, perspective, warping.

    1 Introduction: The renaissance of '3D' movies has led to the development of both hardware and software stereoscopic techniques for both professionals and consumers. Availability of 3D content has dramatically increased, with 3D movies shown in cinemas, launches of 3D TV channels, and 3D games on PCs. [...] stereoscopic images, such as stereoscopic inpainting [6], disparity mapping [3], [7], and 3D copy & paste [4]. However, relatively few tools are available compared to the number of 2D image tools. Perspective plays an essential role in the appearance of images. In 2D, a simple approach to changing the [...]
  • Distance Estimation from Stereo Vision: Review and Results
    DISTANCE ESTIMATION FROM STEREO VISION: REVIEW AND RESULTS

    A project presented to the faculty of the Department of Computer Engineering, California State University, Sacramento, submitted in partial satisfaction of the requirements for the degree of Master of Science in Computer Engineering, by Sarmad Khalooq Yaseen, Spring 2019. © 2019 Sarmad Khalooq Yaseen. All rights reserved. Committee Chair: Dr. Fethi Belkhouche. Second Reader: Dr. Preetham B. Kumar. Graduate Coordinator: Dr. Behnam S. Arad.

    Abstract: Stereo vision is one of the major researched domains of computer vision, and it can be used for different applications, among them extraction of the depth of a scene. This project provides a review of stereo vision and matching algorithms used to solve the correspondence problem [22]. A stereo vision system consists of two cameras with parallel optical axes which are separated from each other by a small distance. The system is used to produce two stereoscopic pictures of a given object. Distance estimation between the object and the cameras depends on two factors: the distance between the positions of the object in the pictures, and the focal lengths of the cameras [37].
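The two factors named in the abstract combine in the standard pinhole-stereo relation Z = f·B/d. A minimal Python sketch of that textbook formula; the function name and the numeric camera parameters are illustrative, not taken from the project:

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic pinhole stereo relation: Z = f * B / d.

    disparity_px    -- horizontal shift of the object between the two images
    focal_length_px -- focal length expressed in pixels
    baseline_m      -- distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 6 cm baseline.
z_near = depth_from_disparity(disparity_px=42.0, focal_length_px=700.0, baseline_m=0.06)
z_far = depth_from_disparity(disparity_px=7.0, focal_length_px=700.0, baseline_m=0.06)
print(f"{z_near:.2f} m, {z_far:.2f} m")  # -> 1.00 m, 6.00 m
```

Note the inverse relationship: nearby objects produce large disparities, so depth resolution degrades quickly with distance for a fixed baseline and focal length.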
  • OSCAM - Optimized Stereoscopic Camera Control for Interactive 3D
    OSCAM - Optimized Stereoscopic Camera Control for Interactive 3D
    Thomas Oskam (1,2), Alexander Hornung (2), Huw Bowles (3,4), Kenny Mitchell (3,4), Markus Gross (1,2). (1) ETH Zurich, (2) Disney Research Zurich, (3) Black Rock Studio, (4) Disney Interactive Studios.

    Figure 1: Two stereoscopic shots of the camera moving towards objects (OSCAM vs. uncontrolled). Our method keeps a constant target depth range when moving close to the objects. Uncontrolled stereoscopy, in contrast, can cause large disparities and destroy stereoscopic perception. © Disney Enterprises, Inc.

    Abstract: This paper presents a controller for camera convergence and interaxial separation that specifically addresses challenges in interactive stereoscopic applications like games. In such applications, unpredictable viewer- or object-motion often compromises stereopsis due to excessive binocular disparities. We derive constraints on the camera separation and convergence that enable our controller to automatically adapt to any given viewing situation and 3D scene, providing an exact mapping of the virtual content into a comfortable depth range around the display. Moreover, we introduce an [...]

    1 Introduction: Stereoscopic content creation, processing, and display has become a pivotal element in movies and entertainment, yet the industry is still confronted with various difficult challenges. Recent research has made substantial progress in some of these areas [Lang et al. 2010; Koppal et al. 2011; Didyk et al. 2011; Heinzle et al. 2011]. Most of these works focus on the classical production pipeline, where the consumer views ready-made content that has been optimized in (post-)production to ensure a comfortable stereoscopic experience. See Tekalp et al. [2011] for an overview.
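The mapping the abstract describes, choosing interaxial separation and convergence so that scene depths land in a comfortable disparity range, can be illustrated with a simplified linear disparity model. This is a hedged sketch, not OSCAM's actual derivation; `solve_camera` and all numeric values are assumptions for illustration:

```python
def solve_camera(z_near, z_far, d_near, d_far, f):
    """Choose interaxial separation b and convergence distance z_c so that
    scene depths [z_near, z_far] map onto screen disparities [d_near, d_far].

    Uses the simple shifted-sensor disparity model
        d(z) = b * f * (1/z_c - 1/z)
    (negative disparity = in front of the screen), which is a common
    textbook approximation rather than the paper's exact formulation.
    Solving the two endpoint equations for the two unknowns gives b and z_c.
    """
    b = (d_far - d_near) / (f * (1.0 / z_near - 1.0 / z_far))
    inv_zc = d_near / (b * f) + 1.0 / z_near
    return b, 1.0 / inv_zc

# Illustrative target: depths 2..10 units onto disparities -20..+20 px.
b, z_c = solve_camera(z_near=2.0, z_far=10.0, d_near=-20.0, d_far=20.0, f=1000.0)
# b = 0.1 scene units, z_c ~= 3.33: the content straddles the screen plane.
```

A controller like the paper's would re-solve such constraints every frame as the viewer or objects move, so disparities never leave the comfort range.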
  • Stereo Fundus Photography: Principles and Technique
    Stereo Fundus Photography: Principles and Technique
    Marshall E. Tyler, CRA, FOPS. Journal of Ophthalmic Photography, Vol. 18, No. 2, October 1996, p. 68.

    Editor's note: The following is an excerpt from a chapter in the forthcoming book by Patrick J. Saine, M.Ed., CRA, FOPS, and Marshall E. Tyler, CRA, FOPS, Ophthalmic Photography: A Textbook of Retinal Photography, Angiography, and Electronic Imaging, Boston: Butterworth-Heinemann, 1997 (available October 1996). This up-to-date, comprehensive manual covers the classic topics of fundus photographic principles, and thoroughly describes electronic imaging and the recent advances in indocyanine green angiography. As evidenced by the two case reports included in this issue of The Journal, and the strong showing of 3-D imagery at the annual competition, stereo photography is a unique and important topic to ophthalmic photographers.

    In stereo fundus photography, two images are created photographically and, when viewed, become fused in the brain.[1] When you view the images, your left eye views the left image, your right eye views the right image, and your brain then recreates the depth relationships that were observed at the time of photography. If you have created the stereo photographs by a reproducible technique, they may also permit additional diagnostic interpretation on a follow-up visit. Many ophthalmic photographers routinely expose all fundus images in stereo. Two exposures are available in case an image is not perfect, and the exposures also provide extra information if the images are a good stereo pair. Accept the challenge: photograph every fundus in stereo and you will find, literally, a new dimension in fundus photography.
  • An Illustrated Description of the View-Master Personal Stereo System
    STEREO PICTURES IN THIS MOUNT WERE NOT TAKEN BY VIEW-MASTER: AN ILLUSTRATED DESCRIPTION OF THE VIEW-MASTER PERSONAL STEREO SYSTEM

    A thesis presented to Ryerson University by Jamie Powell Sheppard in partial fulfilment of the requirements for the degree of Master of Arts in the Program of Film and Photographic Preservation and Collections Management. Toronto, Ontario, Canada, 2016. © Jamie Powell Sheppard 2016.

    Author's Declaration: I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I authorize Ryerson University to lend this thesis to other institutions or individuals for the purpose of scholarly research. I further authorize Ryerson University to reproduce this thesis by photocopying or by other means, in total or in part, at the request of other institutions or individuals for the purpose of scholarly research. I understand that my thesis may be made electronically available to the public.

    Abstract: The View-Master is a beloved toy, well-known to many. However, most people are unaware of the View-Master Personal Stereo system for creating one's own View-Master Personal Reels – an unusual combination of vernacular imagery and three-dimensional photography. Unfortunately, little has been written about this system, and the institutional collections of Personal Reels are limited.
  • Trends in S3D-Movie Quality Evaluated on 105 Films Using 10 Metrics
    Trends in S3D-Movie Quality Evaluated on 105 Films Using 10 Metrics
    Dmitriy Vatolin, Alexander Bokov, Mikhail Erofeev, Vyacheslav Napadovsky; Lomonosov Moscow State University, Moscow, Russia. © 2016 Society for Imaging Science and Technology. DOI: 10.2352/ISSN.2470-1173.2016.5.SDA-439.

    Abstract: In this paper we present a large-scale analysis of S3D-movie technical quality spanning a large portion of stereoscopic-cinema history. We evaluated 105 Blu-ray 3D releases, including titles like the 1954 classic Dial M for Murder, as well as contemporary stereoscopic productions like Life of Pi and The Great Gatsby. The analysis is based on objective quality metrics designed to detect specific types of artifacts, including swapped channels, inconsistencies between the stereoscopic views (color, sharpness, and geometric as well as temporal asynchrony), and many others. The main challenges we had to overcome were the enormous amount of computational resources and disk space that such analyses require, as well as algorithmic difficulties in developing some of the more advanced objective quality metrics. Our study clarifies the quality trends and problems of S3D movie production in [...]

    [...]ing to either vertical parallax if the object is moving vertically, or to conflicting depth cues if the object is moving horizontally.

    Figure 1: Distribution of temporal offsets between the left and right views for all detected temporal-asynchrony cases in 105 S3D movies.
  • Disparity Remapping for Handheld 3D Video Communications
    DISPARITY REMAPPING FOR HANDHELD 3D VIDEO COMMUNICATIONS
    Stephen Mangiat and Jerry Gibson, Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106. {smangiat, gibson}@ece.ucsb.edu

    Abstract: With the advent of glasses-free autostereoscopic handheld displays, a number of applications may be enhanced by 3D perception, particularly mobile video communications. On handheld devices, front-facing stereo cameras can capture the two views necessary for 3D display. However, the short distance between the user and cameras introduces large disparities and other stereoscopic challenges that have traditionally plagued close-up stereo photography. Maintaining both viewer comfort and 3D fusion under varying conditions is therefore a technological priority. In this paper, we discuss the main stereoscopic concerns of handheld 3D video communications and outline a new post-processing technique to [...]

    [...] light from the desired depth). However, when viewing imagery on a stereoscopic display, this link is broken because the eyes must always focus light at the distance of the display, regardless of their convergence angle. This disconnect is a main source of discomfort and eye fatigue [2], and it is inherent to all stereoscopic systems that use planar screens. The discomfort caused by this "vergence-accommodation conflict" can be mitigated by shrinking the stereo baseline to limit disparities, which unfortunately means reducing the 3D effect. When the stereo baseline is too small, the resultant video will exhibit the "cardboard cutout" effect, and will not capture 3D structure within the user's face [1]. In this case, stereo cameras would fail to enhance immersion. In Sec. [...]
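The post-processing idea, compressing the large close-up disparities of a front-facing rig into a display's comfort zone, can be sketched with a simple linear operator. Real remapping operators (including the paper's) are more sophisticated and content-adaptive; `remap_disparities` and its comfort limits are illustrative assumptions:

```python
import numpy as np

def remap_disparities(disparity_map, comfort_min, comfort_max):
    """Linearly compress a disparity map into a comfortable range.

    A deliberately simple stand-in for disparity remapping: rescale the
    measured disparity interval onto [comfort_min, comfort_max] so screen
    disparities stay inside the display's comfort zone.
    """
    d_min, d_max = float(disparity_map.min()), float(disparity_map.max())
    if d_max == d_min:
        # Flat scene: place everything at the middle of the comfort range.
        return np.full_like(disparity_map, (comfort_min + comfort_max) / 2.0)
    scale = (comfort_max - comfort_min) / (d_max - d_min)
    return comfort_min + (disparity_map - d_min) * scale

# Illustrative close-up disparities (pixels), far too large for comfort:
raw = np.array([[-90.0, -10.0], [40.0, 150.0]])
safe = remap_disparities(raw, comfort_min=-8.0, comfort_max=8.0)
# All values now lie within [-8, 8] px while preserving depth ordering.
```

Linear compression preserves depth ordering but flattens relative depth, which is one reason published remapping operators prefer nonlinear, saliency-aware mappings.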
  • Stereo Vision Based Depth of Field Rendering on a Mobile Device
    Best Student Paper Award

    Stereo Vision based Depth of Field Rendering on a Mobile Device
    Qiaosong Wang, Zhan Yu, Christopher Rasmussen, and Jingyi Yu; University of Delaware, Newark, DE 19716, USA

    Abstract: The depth-of-field (DoF) effect is a useful tool in photography and cinematography because of its aesthetic value. However, capturing and displaying a dynamic DoF effect was until recently a quality unique to expensive and bulky movie cameras. In this paper, we propose a computational approach to generate realistic DoF effects for mobile devices such as tablets. We first calibrate the rear-facing stereo cameras and rectify the stereo image pairs through the FCam API, then generate a low-resolution disparity map using graph-cuts stereo matching and subsequently upsample it via joint bilateral upsampling. Next we generate a synthetic light field by warping the raw color image to nearby viewpoints according to the corresponding values in the upsampled high-resolution disparity map. Finally, we render the dynamic DoF effect on the tablet screen with light field rendering. The user can easily capture and generate desired DoF effects with arbitrary aperture sizes or focal depths using the tablet only, with no additional hardware or software required. The system has been tested in a variety of environments with satisfactory results.

    Keywords: Depth of Field, Programmable Cameras, Joint Bilateral Upsampling, Light Field

    1. Introduction: The dynamic depth-of-field effect is a useful tool in photography and cinematography because of its aesthetic value. Capturing and displaying a dynamic DoF effect was until recently a quality unique to expensive and bulky movie cameras. Problems such as radial distortion may also arise if the lens system is not set up properly. [...]
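The final rendering step of the pipeline above can be approximated with a toy depth-dependent blur: pixels whose disparity differs from the chosen in-focus disparity get larger blur kernels, a circle-of-confusion proxy. The paper itself renders via a synthetic light field; this one-scanline sketch, with hypothetical names like `render_dof_1d`, only illustrates the disparity-to-blur idea:

```python
import numpy as np

def blur_radius_map(disparity, focus_disparity, aperture_scale):
    """Circle-of-confusion proxy: blur radius grows with the distance
    (in disparity space) from the in-focus disparity.  aperture_scale
    plays the role of the aperture size."""
    return aperture_scale * np.abs(disparity - focus_disparity)

def render_dof_1d(row, disparity_row, focus_disparity, aperture_scale):
    """Naive gather blur on a single scanline: average each pixel over a
    window whose half-width comes from blur_radius_map.  A full renderer
    would operate in 2D and handle occlusion boundaries."""
    radii = np.rint(blur_radius_map(disparity_row, focus_disparity,
                                    aperture_scale)).astype(int)
    out = np.empty_like(row, dtype=float)
    for i, r in enumerate(radii):
        lo, hi = max(0, i - r), min(len(row), i + r + 1)
        out[i] = row[lo:hi].mean()
    return out

# A bright pixel on an in-focus scanline stays sharp; shift the focal
# disparity and it spreads into its neighbors.
row = np.array([0.0, 0.0, 10.0, 0.0, 0.0])
sharp = render_dof_1d(row, np.full(5, 2.0), focus_disparity=2.0, aperture_scale=3.0)
blurred = render_dof_1d(row, np.full(5, 4.0), focus_disparity=2.0, aperture_scale=0.5)
```

Changing `focus_disparity` at display time is what makes the effect "dynamic": the same disparity map supports refocusing after capture.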