Computer Generation of Integral Images Using Interpolative Shading Techniques
Computer Generation of Integral Images using Interpolative Shading Techniques

Graham E. Milnthorpe

A thesis submitted for the degree of Doctor of Philosophy

School of Engineering and Manufacture, De Montfort University, Leicester

December 2003

IMAGING SERVICES NORTH, Boston Spa, Wetherby, West Yorkshire, LS23 7BQ, www.bl.uk
The following has been excluded at the request of the university: pages 151-164.

Summary

Research to produce artificial 3D images that duplicate human stereovision has been ongoing for hundreds of years. What has taken millions of years to evolve in humans is proving elusive even for present-day technology, and the difficulties are compounded when real-time generation is contemplated.

The problem is one of depth. When we perceive the world around us, the sense of depth is the result of many different factors, which can be classified as monocular and binocular. Monocular depth cues include overlapping or occlusion, shading and shadows, and texture. Another monocular cue (and to some extent binocular) is accommodation, where the focal length of the crystalline lens is adjusted to view an image. The important binocular cues are convergence and parallax. Convergence allows the observer to judge distance from the difference in angle between the viewing axes of the left and right eyes when both are focussing on a point. Parallax refers to the fact that each eye sees a slightly shifted view of the scene. If a system can be produced that requires the observer to use all of these cues, as when viewing the real world, then the transition to and from viewing a 3D display will be seamless. For many of the 3D imaging techniques towards which current work is primarily directed, however, this is not the case, and this raises a serious issue of viewer comfort.
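The geometry of the convergence cue can be sketched numerically. The following is an illustration only, not part of the thesis: the function names and the 65 mm interocular baseline are assumptions. Both eyes fixating a point on the midline form an isosceles triangle, so the convergence angle and the depth of the point determine each other.

```python
import math

def convergence_angle(baseline_m, depth_m):
    # Total convergence angle (radians) when both eyes, separated by
    # baseline_m, fixate a point at depth_m on the midline between them.
    return 2.0 * math.atan((baseline_m / 2.0) / depth_m)

def depth_from_convergence(baseline_m, angle_rad):
    # Invert the relation: recover the depth of the fixated point
    # from the measured convergence angle.
    return (baseline_m / 2.0) / math.tan(angle_rad / 2.0)

# Example: a typical ~65 mm interocular distance, object at 1 m.
theta = convergence_angle(0.065, 1.0)
```

The rapid flattening of this angle with distance is one reason convergence is only a useful depth cue at close range.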
Researchers worldwide, in universities and industry, are pursuing their own approaches to the development of 3D systems, and physiological disturbances that can cause nausea in some observers will not be acceptable. The ideal 3D system would require, as a minimum, accurate depth reproduction, multi-viewer capability and all-round seamless viewing. Freedom from stereoscopic or polarising glasses would be ideal, and lack of viewer fatigue is essential. Finally, whatever the use of the system, be it CAD, medical, scientific visualisation or remote inspection on the one hand, or consumer markets such as 3D video games and 3DTV on the other, the system has to be relatively inexpensive.

Integral photography is a 'real camera' system that attempts to comply with this ideal. It was invented in 1908 but, for technological reasons, was not then capable of providing a useful autostereoscopic system. More recently, along with advances in technology, it has become a more attractive proposition for those interested in developing a suitable system for 3DTV.

The fast computer generation of integral images is the subject of this thesis, the adjective 'fast' being used to distinguish it from the much slower technique of ray tracing integral images. These two techniques are the standard in monoscopic computer graphics, where ray tracing generates photo-realistic images and the fast forward geometric approach, which uses interpolative shading techniques, is the method used for real-time generation. Before the present work began it was not known whether volumetric integral images could be created using a fast approach similar to that employed by standard computer graphics, but it soon became apparent that this would be successful and hence a valuable contribution in this area. Presented herein is a full description of the development of two derived methods for producing rendered integral image animations using interpolative shading.
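As background to why interpolative shading is fast: in Gouraud-style interpolative shading, lighting is evaluated only at polygon vertices and intensities are interpolated linearly across each scanline, replacing a per-pixel lighting calculation with one addition per pixel. A minimal sketch of one scanline span (the function name and the choice of Gouraud-style interpolation as the illustration are assumptions, not taken from the thesis):

```python
def interpolate_scanline(i_left, i_right, x_left, x_right):
    # Linearly interpolate intensity across a scanline span from
    # pixel x_left to x_right (inclusive). Only the endpoint
    # intensities come from a lighting calculation; every interior
    # pixel costs a single addition.
    if x_right == x_left:
        return [i_left]
    step = (i_right - i_left) / (x_right - x_left)
    return [i_left + step * k for k in range(x_right - x_left + 1)]
```

Ray tracing, by contrast, fires at least one ray per pixel through the scene, which is why it remains far slower for animation.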
The main body of the work is the development of code to put these methods into practice, along with many observations and discoveries that the author made during this task.

Acknowledgments

I would like to express my thanks to all the people in the 3D and Biomedical Imaging Group of De Montfort University for the support they have given me during this research. Special thanks to Malcolm McCormick for the encouragement he has given me and the trust he has shown in abilities that I did not realise I had. Thanks to Neil Davies for his constant determination for perfection in the computer generation of the integral images. Finally, thanks to Matt Forman for revealing to me the mysteries of Unix and for the difficult job of keeping the systems up and running.

Funding for the research came, in chronological order, from the Defence Evaluation and Research Agency (DERA), a contract (LAIRD) under the European Link/EPSRC photonics initiative, and DTI/EPSRC sponsorship within the PROMETHEUS project.

Contents

Summary
Acknowledgments
Contents
List of Figures
List of Tables
Nomenclature

1 Introduction
1.1 Discussion
1.2 3D imaging systems
1.3 Directionality
1.4 Multiview
1.5 Integral imaging
1.6 Computer generation of integral images
1.6.1 Standard ray tracing
1.6.2 Integral ray tracing
1.6.3 Standard interpolative shading
1.6.4 Integral interpolative shading
1.7 Forward geometric integral projection development criteria

2 Forward Geometric Projection Capture Methods
2.1 Introduction
2.2 Ray mesh pattern produced on playback
2.2.1 Simplifications of mesh representation and beam spread
2.2.2 Aperture planes and their distance from lens array
2.2.3 Problems of using light-ray mesh for a capture technique
2.2.4 Variable perspective
2.3 Variable perspective models
2.3.1 3D-from-2D model (pinhole model)
2.3.2 Lens array model (finite-sized aperture model)
2.4 Effect of moving the aperture
2.5 Conclusions

3 Forward Projection Pinhole Mesh Model
3.1 Introduction
3.2 Aperture and lens array positioning
3.3 3D line drawing algorithms
3.3.1 Optimum spacing between points representing a line
3.3.2 Producing lines of equidistant points in 3D space
3.4 Lens array coverage of points at different depths
3.4.1 Calculating the coverage
3.4.2 Finding the pinholes within the coverage
3.5 Capturing object points at the image plane
3.6 Conclusions

4 Forward Projection Finite-Sized Aperture Mesh Model
4.1 Introduction
4.2 Refraction at locations on the lenslets
4.3 Backface culling
4.4 Optical aberrations
4.4.1 Diffraction
4.4.2 Spherical aberration and defocus
4.5 Integer and non-integer number of pixels per lens
4.6 Conclusions

5 Forward Projection Finite-Sized Aperture Rendering Model
5.1 Introduction
5.2 Object file format
5.3 Implementation modes
5.4 Parameters
5.5 A lens array system matrix
5.6 Initial scene set-up and animation
5.7 Trace and shade
5.7.1 Object and colour-block transformations
5.7.2 Normals
5.7.3 Ambient and diffuse components